AI, Volume 5, Issue 1 (March 2024) – 22 articles

Cover Story: Artificial intelligence is rapidly reshaping the landscape of modern orthodontics. From enhanced diagnostics to personalized treatment planning, outcome prediction, and retention monitoring, AI is driving unprecedented efficiency and precision in orthodontic care. This review highlights the transformative potential of AI in orthodontics, signaling a paradigm shift towards evidence-based, patient-centered, and face-driven treatment approaches.
19 pages, 27782 KiB  
Article
Trust-Aware Reflective Control for Fault-Resilient Dynamic Task Response in Human–Swarm Cooperation
by Yibei Guo, Yijiang Pang, Joseph Lyons, Michael Lewis, Katia Sycara and Rui Liu
AI 2024, 5(1), 446-464; https://doi.org/10.3390/ai5010022 - 21 Mar 2024
Viewed by 1258
Abstract
Due to the complexity of real-world deployments, a robot swarm is required to dynamically respond to tasks such as tracking multiple vehicles and continuously searching for victims. Frequent task assignments eliminate the need for system calibration time, but they also introduce uncertainty from previous tasks, which can undermine swarm performance. Therefore, responding to dynamic tasks presents a significant challenge for a robot swarm compared to handling tasks one at a time. In human–human cooperation, trust plays a crucial role in understanding each other’s performance expectations and adjusting one’s behavior for better cooperation. Taking inspiration from human trust, this paper introduces a trust-aware reflective control method called “Trust-R”. Trust-R, based on a weighted mean subsequence reduced (WMSR) algorithm and human trust modeling, enables a swarm to self-reflect on its performance from a human perspective. It proactively corrects faulty behaviors at an early stage before human intervention, mitigating the negative influence of uncertainty accumulated from dynamic tasks. Three typical task scenarios (Scenario 1: flocking to the assigned destination; Scenario 2: a transition between destinations; and Scenario 3: emergent response) were designed in the real-gravity simulation environment, and a human user study with 145 volunteers was conducted. Trust-R significantly improves both swarm performance and trust in dynamic task scenarios, marking a pivotal step forward in integrating trust dynamics into swarm robotics. Full article
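The weighted mean subsequence reduced (W-MSR) rule underlying Trust-R can be illustrated with a minimal scalar-consensus sketch (this is the generic W-MSR update, not the authors' implementation; the function name and the fault bound `f` are illustrative):

```python
def wmsr_update(own_value, neighbor_values, f):
    """One W-MSR consensus step for a single robot with a scalar state.

    Discard up to f neighbor values strictly smaller than own_value
    (the smallest ones) and up to f strictly larger (the largest ones),
    then average the surviving neighbors together with own_value.
    """
    kept = sorted(neighbor_values)
    removed = 0
    while removed < f and kept and kept[0] < own_value:
        kept.pop(0)          # drop an extreme low value
        removed += 1
    removed = 0
    while removed < f and kept and kept[-1] > own_value:
        kept.pop()           # drop an extreme high value
        removed += 1
    values = kept + [own_value]
    return sum(values) / len(values)
```

Because each robot drops the `f` most extreme neighbor values on each side of its own state before averaging, up to `f` faulty neighbors cannot drag the consensus arbitrarily far.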

20 pages, 6807 KiB  
Article
Single Image Super Resolution Using Deep Residual Learning
by Moiz Hassan, Kandasamy Illanko and Xavier N. Fernando
AI 2024, 5(1), 426-445; https://doi.org/10.3390/ai5010021 - 21 Mar 2024
Viewed by 1186
Abstract
Single Image Super-Resolution (SISR) is an intriguing research topic in computer vision where the goal is to create high-resolution images from low-resolution ones using innovative techniques. SISR has numerous applications in fields such as medical/satellite imaging, remote target identification, and autonomous vehicles. Compared to traditional interpolation-based approaches, deep learning techniques have recently gained attention in SISR due to their superior performance and computational efficiency. This article proposes an autoencoder-based deep learning model for SISR. The down-sampling part of the autoencoder mainly uses 3 × 3 convolutions and has no subsampling layers. The up-sampling part uses transpose convolutions and residual connections from the down-sampling part. The model is trained using a subset of the ILSVRC ImageNet database as well as the RealSR database. Quantitative metrics such as PSNR and SSIM are found to be as high as 76.06 and 0.93 in our testing. We also used qualitative measures such as perceptual quality. Full article
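Of the two quantitative metrics reported, PSNR is simple enough to sketch directly (the generic definition, not tied to the authors' code; SSIM is considerably more involved):

```python
import math

def psnr(reference, reconstruction, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized images,
    given here as flat lists of pixel intensities."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the reconstruction is closer to the reference; identical images give infinite PSNR.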
(This article belongs to the Special Issue Artificial Intelligence-Based Image Processing and Computer Vision)

21 pages, 683 KiB  
Review
Few-Shot Fine-Grained Image Classification: A Comprehensive Review
by Jie Ren, Changmiao Li, Yaohui An, Weichuan Zhang and Changming Sun
AI 2024, 5(1), 405-425; https://doi.org/10.3390/ai5010020 - 06 Mar 2024
Viewed by 1267
Abstract
Few-shot fine-grained image classification (FSFGIC) methods refer to the classification of images (e.g., birds, flowers, and airplanes) belonging to different subclasses of the same species using only a small number of labeled samples. Through feature representation learning, FSFGIC methods can make better use of limited sample information, learn more discriminative feature representations, greatly improve the classification accuracy and generalization ability, and thus achieve better results in FSFGIC tasks. In this paper, starting from the definition of FSFGIC, a taxonomy of feature representation learning for FSFGIC is proposed. According to this taxonomy, we discuss key issues in FSFGIC (including data augmentation, local and/or global deep feature representation learning, class representation learning, and task-specific feature representation learning). In addition, the existing popular datasets, current challenges, and future development trends of feature representation learning on FSFGIC are also described. Full article
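The few-shot setting the review surveys is usually posed as N-way K-shot episodes; sampling one such episode can be sketched as follows (an illustrative helper, not taken from any surveyed method):

```python
import random

def sample_episode(dataset, n_way, k_shot, n_query, seed=0):
    """Sample one N-way K-shot episode from {class_name: [samples]}.

    Returns (support, query) lists of (sample, class_name) pairs:
    k_shot labeled examples per class for adaptation, and n_query
    held-out examples per class for evaluation.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for c in classes:
        picks = rng.sample(dataset[c], k_shot + n_query)
        support += [(x, c) for x in picks[:k_shot]]
        query += [(x, c) for x in picks[k_shot:]]
    return support, query
```

Training on many such episodes forces the model to classify unseen subclasses from only K labeled examples each.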
(This article belongs to the Special Issue Artificial Intelligence-Based Image Processing and Computer Vision)

22 pages, 423 KiB  
Review
A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring
by Elham Albaroudi, Taha Mansouri and Ali Alameer
AI 2024, 5(1), 383-404; https://doi.org/10.3390/ai5010019 - 07 Feb 2024
Viewed by 5090
Abstract
The study comprehensively reviews artificial intelligence (AI) techniques for addressing algorithmic bias in job hiring. More businesses are using AI in curriculum vitae (CV) screening. While the move improves efficiency in the recruitment process, it is vulnerable to biases, which have adverse effects on organizations and the broader society. This research aims to analyze case studies on AI hiring to demonstrate both successful implementations and instances of bias. It also seeks to evaluate the impact of algorithmic bias and the strategies to mitigate it. The basic design of the study entails undertaking a systematic review of existing literature and research studies that focus on artificial intelligence techniques employed to mitigate bias in hiring. The results demonstrate that the correction of the vector space and data augmentation are effective natural language processing (NLP) and deep learning techniques for mitigating algorithmic bias in hiring. The findings underscore the potential of artificial intelligence techniques in promoting fairness and diversity in the hiring process. The study contributes to human resource practice by enhancing hiring algorithms’ fairness. It recommends collaboration between machines and humans to enhance the fairness of the hiring process. The results can help AI developers make the algorithmic changes needed to enhance fairness in AI-driven tools. This will enable the development of ethical hiring tools, contributing to fairness in society. Full article
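The "correction of the vector space" the review identifies as effective can be illustrated by projecting an embedding onto the subspace orthogonal to a learned bias direction (a one-direction simplification of hard debiasing; the function name is illustrative):

```python
def remove_bias_direction(vec, bias_dir):
    """Return vec with its component along bias_dir removed, so the
    debiased embedding carries no signal along that direction."""
    norm_sq = sum(b * b for b in bias_dir)
    scale = sum(v * b for v, b in zip(vec, bias_dir)) / norm_sq
    return [v - scale * b for v, b in zip(vec, bias_dir)]
```

After projection, the dot product of the embedding with the bias direction is zero, so a downstream screening model cannot pick up that signal from the corrected vectors.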
(This article belongs to the Section AI Systems: Theory and Applications)

19 pages, 4499 KiB  
Article
Automated Classification of User Needs for Beginner User Experience Designers: A Kano Model and Text Analysis Approach Using Deep Learning
by Zhejun Zhang, Huiying Chen, Ruonan Huang, Lihong Zhu, Shengling Ma, Larry Leifer and Wei Liu
AI 2024, 5(1), 364-382; https://doi.org/10.3390/ai5010018 - 02 Feb 2024
Viewed by 1199
Abstract
This study introduces a novel tool for classifying user needs in user experience (UX) design, specifically tailored for beginners, with potential applications in education. The tool employs the Kano model, text analysis, and deep learning to classify user needs efficiently into four categories. The data for the study were collected through interviews and web crawling, yielding 19 user needs from Generation Z users (born between 1995 and 2009) of LEGO toys (Billund, Denmark). These needs were then categorized into must-be, one-dimensional, attractive, and indifferent needs through a Kano-based questionnaire survey. A dataset of over 3000 online comments was created through preprocessing and annotation, which was used to train and evaluate seven deep learning models. The most effective model, the Recurrent Convolutional Neural Network (RCNN), was employed to develop a graphical text classification tool that accurately outputs the corresponding category and probability of user input text according to the Kano model. A usability test compared the tool’s performance to the traditional affinity diagram method. The tool outperformed the affinity diagram method in six dimensions and on three qualities of the User Experience Questionnaire (UEQ), indicating a superior UX. The tool also demonstrated a lower perceived workload, as measured using the NASA Task Load Index (NASA-TLX), and received a positive Net Promoter Score (NPS) of 23 from the participants. These findings underscore the potential of this tool as a valuable educational resource in UX design courses. It offers students a more efficient, more engaging, and less burdensome learning experience while seamlessly integrating artificial intelligence into UX design education. This study provides UX design beginners with a practical and intuitive tool, facilitating a deeper understanding of user needs and innovative design strategies. Full article
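The Kano categorization of questionnaire answers mentioned above follows a standard evaluation table; a simplified lookup covering the four categories used in the study might look like this (a sketch of the conventional Kano table, not the authors' tool):

```python
def kano_category(functional, dysfunctional):
    """Classify one respondent's Kano answer pair.

    functional: reaction if the feature is present;
    dysfunctional: reaction if it is absent.
    Answers: 'like', 'must-be', 'neutral', 'live-with', 'dislike'.
    Simplified to the four categories used in the study, with a
    catch-all for inconsistent answer pairs.
    """
    middle = {"must-be", "neutral", "live-with"}
    if functional == "like" and dysfunctional == "dislike":
        return "one-dimensional"
    if functional == "like" and dysfunctional in middle:
        return "attractive"
    if functional in middle and dysfunctional == "dislike":
        return "must-be"
    if functional in middle and dysfunctional in middle:
        return "indifferent"
    return "questionable/reverse"
```

A need's final category is then typically taken as the most frequent category across all respondents.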
(This article belongs to the Topic Machine Learning in Internet of Things)

22 pages, 2007 KiB  
Article
New Convolutional Neural Network and Graph Convolutional Network-Based Architecture for AI Applications in Alzheimer’s Disease and Dementia-Stage Classification
by Md Easin Hasan and Amy Wagler
AI 2024, 5(1), 342-363; https://doi.org/10.3390/ai5010017 - 01 Feb 2024
Viewed by 1310
Abstract
Neuroimaging experts in biotech industries can benefit from using cutting-edge artificial intelligence techniques for Alzheimer’s disease (AD)- and dementia-stage prediction, even though it is difficult to anticipate the precise stage of dementia and AD. Therefore, we propose a cutting-edge, computer-assisted method based on an advanced deep learning algorithm to differentiate between people with varying degrees of dementia, including healthy, very mild dementia, mild dementia, and moderate dementia classes. In this paper, four separate models were developed for classifying different dementia stages: convolutional neural networks (CNNs) built from scratch, pre-trained VGG16 with additional convolutional layers, graph convolutional networks (GCNs), and CNN-GCN models. The CNNs were implemented, and then the flattened layer output was fed to the GCN classifier, resulting in the proposed CNN-GCN architecture. A total of 6400 whole-brain magnetic resonance imaging scans were obtained from the Alzheimer’s Disease Neuroimaging Initiative database to train and evaluate the proposed methods. We applied the 5-fold cross-validation (CV) technique for all the models. We presented the results from the best fold out of the five folds in assessing the performance of the models developed in this study. Hence, for the best fold of the 5-fold CV, the above-mentioned models achieved an overall accuracy of 43.83%, 71.17%, 99.06%, and 100%, respectively. The CNN-GCN model, in particular, demonstrates excellent performance in classifying different stages of dementia. Understanding the stages of dementia can assist biotech industry researchers in uncovering molecular markers and pathways connected with each stage. Full article
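The graph-convolution step at the core of the CNN-GCN hybrid can be illustrated with the common propagation rule H' = ReLU(D^(-1/2)(A + I)D^(-1/2) H W) (a generic single-layer sketch in plain Python, not the authors' architecture):

```python
import math

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    adj: n x n 0/1 adjacency matrix (list of lists),
    feats: n x d node features, weight: d x k trainable weights.
    """
    n = len(adj)
    # add self-loops, then symmetrically normalize by node degree
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]

    def matmul(a, b):
        return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    h = matmul(matmul(norm, feats), weight)
    return [[max(0.0, v) for v in row] for row in h]
```

In the paper's pipeline, the CNN's flattened features play the role of `feats`, so each scan's representation is smoothed over its graph neighbors before classification.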

18 pages, 3623 KiB  
Article
Convolutional Neural Networks in the Diagnosis of Colon Adenocarcinoma
by Marco Leo, Pierluigi Carcagnì, Luca Signore, Francesco Corcione, Giulio Benincasa, Mikko O. Laukkanen and Cosimo Distante
AI 2024, 5(1), 324-341; https://doi.org/10.3390/ai5010016 - 29 Jan 2024
Viewed by 1056
Abstract
Colorectal cancer is one of the most lethal cancers because of late diagnosis and challenges in the selection of therapy options. The histopathological diagnosis of colon adenocarcinoma is hindered by poor reproducibility and a lack of standard examination protocols required for appropriate treatment decisions. In the current study, using state-of-the-art approaches on benchmark datasets, we analyzed different architectures and ensembling strategies to develop the most efficient network combinations to improve binary and ternary classification. We propose an innovative two-stage pipeline approach to diagnose colon adenocarcinoma grading from histological images in a similar manner to a pathologist. The glandular regions were first segmented by a transformer architecture with subsequent classification using a convolutional neural network (CNN) ensemble, which markedly improved the learning efficiency and shortened the learning time. Moreover, we prepared and published a dataset for clinical validation of the developed artificial neural network, which suggested the discovery of novel histological phenotypic alterations in adenocarcinoma sections that could have prognostic value. Therefore, AI could markedly improve the reproducibility, efficiency, and accuracy of colon cancer diagnosis, which are required for precision medicine to personalize the treatment of cancer patients. Full article
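One common way to combine a CNN ensemble's predictions in the classification stage is soft voting over class probabilities (an illustrative sketch; the authors compare several ensembling strategies, which may differ from this one):

```python
def soft_vote(prob_lists):
    """Average the class-probability vectors produced by several
    classifiers and return (winning_class_index, averaged_probs)."""
    n = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n for i in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg
```

Soft voting lets a confident minority of models outweigh a hesitant majority, which is why it often beats hard (majority) voting when the member networks are well calibrated.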

34 pages, 1970 KiB  
Review
Forging the Future: Strategic Approaches to Quantum AI Integration for Industry Transformation
by Meng-Leong How and Sin-Mei Cheah
AI 2024, 5(1), 290-323; https://doi.org/10.3390/ai5010015 - 29 Jan 2024
Viewed by 3862
Abstract
The fusion of quantum computing and artificial intelligence (AI) heralds a transformative era for Industry 4.0, offering unprecedented capabilities and challenges. This paper delves into the intricacies of quantum AI, its potential impact on Industry 4.0, and the necessary change management and innovation strategies for seamless integration. Drawing from theoretical insights and real-world case studies, we explore the current landscape of quantum AI, its foreseeable influence, and the implications for organizational strategy. We further expound on traditional change management tactics, emphasizing the importance of continuous learning, ecosystem collaborations, and proactive approaches. By examining successful and failed quantum AI implementations, lessons are derived to guide future endeavors. Conclusively, the paper underscores the imperative of being proactive in embracing quantum AI innovations, advocating for strategic foresight, interdisciplinary collaboration, and robust risk management. Through a comprehensive exploration, this paper aims to equip stakeholders with the knowledge and strategies to navigate the complexities of quantum AI in Industry 4.0, emphasizing its transformative potential and the necessity for preparedness and adaptability. Full article

31 pages, 4849 KiB  
Article
MultiWave-Net: An Optimized Spatiotemporal Network for Abnormal Action Recognition Using Wavelet-Based Channel Augmentation
by Ramez M. Elmasry, Mohamed A. Abd El Ghany, Mohammed A.-M. Salem and Omar M. Fahmy
AI 2024, 5(1), 259-289; https://doi.org/10.3390/ai5010014 - 24 Jan 2024
Viewed by 1040
Abstract
Human behavior is regarded as one of the most complex notions present nowadays, due to the large magnitude of possibilities. These behaviors and actions can be distinguished as normal and abnormal. However, abnormal behavior is a vast spectrum, so in this work, abnormal behavior is regarded as human aggression or, in another context, car accidents occurring on the road. As this behavior can negatively affect the surrounding traffic participants, such as vehicles and other pedestrians, it is crucial to monitor such behavior. Given the current prevalent spread of cameras everywhere with different types, they can be used to classify and monitor such behavior. Accordingly, this work proposes a new optimized model based on a novel integrated wavelet-based channel augmentation unit for classifying human behavior in various scenes, with a total of 5.3 million trainable parameters and an average inference time of 0.09 s. The model has been trained and evaluated on four public datasets: Real Live Violence Situations (RLVS), Highway Incident Detection (HWID), Movie Fights, and Hockey Fights. The proposed technique achieved accuracies in the range of 92% to 99.5% across the used benchmark datasets. Comprehensive analysis and comparisons between different versions of the model and the state-of-the-art have been performed to confirm the model’s performance in terms of accuracy and efficiency. The proposed model achieves higher accuracy, by an average of 4.97%, and higher efficiency, reducing the number of parameters by around 139.1 million compared to other models trained and tested on the same benchmark datasets. Full article
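The wavelet split behind the channel-augmentation unit can be illustrated with one level of the Haar transform, whose low- and high-frequency halves can be stacked as extra input channels (a generic sketch; the augmentation unit itself is the paper's novel contribution):

```python
def haar_1d(signal):
    """One level of a (scaled) Haar wavelet transform.

    Per non-overlapping pair, the average carries the low-frequency
    content and the half-difference the high-frequency content; the two
    halves can then be stacked as additional channels for the network.
    """
    assert len(signal) % 2 == 0, "signal length must be even"
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high
```

Applying the same split along image rows and columns yields the familiar four sub-bands (LL, LH, HL, HH) used in wavelet-based channel augmentation.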
(This article belongs to the Special Issue Artificial Intelligence-Based Image Processing and Computer Vision)

22 pages, 8560 KiB  
Article
Enhancing Thermo-Acoustic Waste Heat Recovery through Machine Learning: A Comparative Analysis of Artificial Neural Network–Particle Swarm Optimization, Adaptive Neuro Fuzzy Inference System, and Artificial Neural Network Models
by Miniyenkosi Ngcukayitobi, Lagouge Kwanda Tartibu and Flávio Bannwart
AI 2024, 5(1), 237-258; https://doi.org/10.3390/ai5010013 - 19 Jan 2024
Viewed by 928
Abstract
Waste heat recovery stands out as a promising technique for tackling both energy shortages and environmental pollution. Currently, this valuable resource, generated through processes like fuel combustion or chemical reactions, is often dissipated into the environment, despite its potential to significantly contribute to the economy. To harness this untapped potential, a traveling-wave thermo-acoustic generator has been designed and subjected to comprehensive experimental analysis. Fifty-two data points corresponding to different working conditions of the system were extracted to build ANN, ANFIS, and ANN-PSO models. Evaluation of performance metrics reveals that the ANN-PSO model demonstrates the highest predictive accuracy (R² = 0.9959), particularly in relation to output voltage. This research demonstrates the potential of machine learning techniques for the analysis of thermo-acoustic systems. In doing so, it is possible to obtain an insight into nonlinearities inherent to thermo-acoustic systems. This advancement empowers researchers to forecast the performance characteristics of alternative configurations with a heightened level of precision. Full article
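The PSO component of the ANN-PSO model can be sketched generically; in the paper it tunes the network, while here it simply minimizes a test function (the inertia and acceleration coefficients are common defaults, not the authors' values):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization of f over [-5, 5]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:  # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In an ANN-PSO hybrid, `f` would be the network's training error as a function of its flattened weight vector.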

29 pages, 5173 KiB  
Article
Bibliometric Mining of Research Trends in Machine Learning
by Lars Lundberg, Martin Boldt, Anton Borg and Håkan Grahn
AI 2024, 5(1), 208-236; https://doi.org/10.3390/ai5010012 - 19 Jan 2024
Cited by 1 | Viewed by 1154
Abstract
We present a method, including tool support, for bibliometric mining of trends in large and dynamic research areas. The method is applied to the machine learning research area for the years 2013 to 2022. A total of 398,782 documents from Scopus were analyzed. A taxonomy containing 26 research directions within machine learning was defined by four experts with the help of a Python program and existing taxonomies. The trends in terms of productivity, growth rate, and citations were analyzed for the research directions in the taxonomy. Our results show that the two directions, Applications and Algorithms, are the largest, and that the direction Convolutional Neural Networks is the one that grows the fastest and has the highest average number of citations per document. It also turns out that there is a clear correlation between the growth rate and the average number of citations per document, i.e., documents in fast-growing research directions have more citations. The trends for machine learning research in four geographic regions (North America, Europe, the BRICS countries, and The Rest of the World) were also analyzed. The number of documents during the time period considered is approximately the same for all regions. BRICS has the highest growth rate, and, on average, North America has the highest number of citations per document. Using our tool and method, we expect that one could perform a similar study in some other large and dynamic research area in a relatively short time. Full article
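A growth-rate measure of the kind analyzed per research direction can be computed as a compound annual growth rate from yearly document counts (one common choice; the paper's exact trend measure may differ):

```python
def growth_rate(counts_by_year):
    """Compound annual growth rate of document counts between the first
    and last year, e.g. {2013: 100, 2022: 400} -> ~0.1665 (16.65%/year)."""
    years = sorted(counts_by_year)
    first, last = counts_by_year[years[0]], counts_by_year[years[-1]]
    span = years[-1] - years[0]
    return (last / first) ** (1 / span) - 1
```

Citations per document, the other trend measure, is just total citations divided by the document count for each direction.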

13 pages, 4445 KiB  
Article
Audio-Based Emotion Recognition Using Self-Supervised Learning on an Engineered Feature Space
by Peranut Nimitsurachat and Peter Washington
AI 2024, 5(1), 195-207; https://doi.org/10.3390/ai5010011 - 17 Jan 2024
Viewed by 1490
Abstract
Emotion recognition models using audio input data can enable the development of interactive systems with applications in mental healthcare, marketing, gaming, and social media analysis. While the field of affective computing using audio data is rich, a major barrier to achieving consistently high-performing models is the paucity of available training labels. Self-supervised learning (SSL) is a family of methods which can learn despite a scarcity of supervised labels by predicting properties of the data itself. To understand the utility of self-supervised learning for audio-based emotion recognition, we have applied self-supervised learning pre-training to the classification of emotions from the CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI)’s acoustic data. Unlike prior papers that have experimented with raw acoustic data, our technique has been applied to encoded acoustic data with 74 parameters of distinctive audio features at discrete timesteps. Our model is first pre-trained to reconstruct the randomly masked timesteps of the acoustic data. The pre-trained model is then fine-tuned using a small sample of annotated data. The performance of the final model is then evaluated via overall mean absolute error (MAE), MAE per emotion, overall four-class accuracy, and four-class accuracy per emotion. These metrics are compared against a baseline deep learning model with an identical backbone architecture. We find that self-supervised learning consistently improves the performance of the model across all metrics, especially when the number of annotated data points in the fine-tuning step is small. Furthermore, we quantify the behaviors of the self-supervised model and its convergence as the amount of annotated data increases. This work characterizes the utility of self-supervised learning for affective computing, demonstrating that self-supervised learning is most useful when the number of training examples is small and that the effect is most pronounced for emotions which are easier to classify, such as happy, sad, and angry. This work further demonstrates that self-supervised learning still improves performance when applied to the embedded feature representations rather than the traditional approach of pre-training on the raw input space. Full article
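The masked-timestep pretext task described above can be sketched as follows (an illustrative helper operating on a T × d feature matrix; the mask ratio and zero-fill are assumptions, not the authors' exact setup):

```python
import random

def mask_timesteps(features, mask_ratio=0.15, seed=0):
    """Randomly mask whole timesteps of an encoded utterance.

    features: T x d matrix (list of lists) of audio features.
    Returns (masked_features, masked_indices, targets), where targets
    maps each masked index to the original row the model must recover.
    """
    rng = random.Random(seed)
    t = len(features)
    n_mask = max(1, int(mask_ratio * t))
    idx = sorted(rng.sample(range(t), n_mask))
    masked = [row[:] for row in features]
    targets = {}
    for i in idx:
        targets[i] = masked[i][:]         # ground truth for the loss
        masked[i] = [0.0] * len(masked[i])  # zero out the masked timestep
    return masked, idx, targets
```

The pretraining loss (e.g., MAE) is computed only on the masked rows, so the model learns temporal structure without any emotion labels.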

18 pages, 2923 KiB  
Article
Secure Internet Financial Transactions: A Framework Integrating Multi-Factor Authentication and Machine Learning
by AlsharifHasan Mohamad Aburbeian and Manuel Fernández-Veiga
AI 2024, 5(1), 177-194; https://doi.org/10.3390/ai5010010 - 10 Jan 2024
Viewed by 1750
Abstract
Securing online financial transactions has become a critical concern in an era where financial services are becoming more and more digital. The transition to digital platforms for conducting daily transactions has exposed customers to possible risks from cybercriminals. This study proposes a framework that combines multi-factor authentication and machine learning to increase the safety of online financial transactions. Our methodology is based on using two layers of security. The first layer incorporates two factors to authenticate users. The second layer utilizes a machine learning component, which is triggered when the system detects potential fraud. This machine learning layer employs facial recognition as a decisive authentication factor for further protection. To build the machine learning model, four supervised classifiers were tested: logistic regression, decision trees, random forest, and naive Bayes. The results showed that the accuracy of each classifier was 97.938%, 97.881%, 96.717%, and 92.354%, respectively. The strength of this study lies in its methodology, which integrates machine learning as an embedded layer in a multi-factor authentication framework to address usability, efficacy, and the dynamic nature of various e-commerce platform features. As the financial landscape evolves, future work will consider a continuous exploration of authentication factors and datasets to enhance and adapt security measures. Full article
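The two-layer decision flow can be sketched as a simple function (an illustrative reading of the framework described above; the argument names and the fraud threshold are assumptions, not the authors' specification):

```python
def authorize(factor_one_ok, factor_two_ok, fraud_score, face_match,
              fraud_threshold=0.5):
    """Two-layer transaction check.

    Layer 1: both authentication factors must pass.
    Layer 2: if the ML model's fraud_score crosses the threshold,
    facial recognition becomes the decisive factor.
    """
    if not (factor_one_ok and factor_two_ok):
        return "denied"
    if fraud_score >= fraud_threshold:
        return "approved" if face_match else "denied"
    return "approved"
```

The design keeps the expensive facial-recognition step out of the common path: it only runs for the minority of transactions the classifier flags as suspicious.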

19 pages, 1720 KiB  
Review
AI and Face-Driven Orthodontics: A Scoping Review of Digital Advances in Diagnosis and Treatment Planning
by Juraj Tomášik, Márton Zsoldos, Ľubica Oravcová, Michaela Lifková, Gabriela Pavleová, Martin Strunga and Andrej Thurzo
AI 2024, 5(1), 158-176; https://doi.org/10.3390/ai5010009 - 05 Jan 2024
Cited by 1 | Viewed by 2646
Abstract
In the age of artificial intelligence (AI), technological progress is changing established workflows and enabling some basic routines to be updated. In dentistry, the patient’s face is a crucial part of treatment planning, although it has always been difficult to grasp in an analytical way. This review highlights the current digital advances that, thanks to AI tools, allow us to implement facial features beyond symmetry and proportionality and incorporate facial analysis into diagnosis and treatment planning in orthodontics. A Scopus literature search was conducted to identify the topics with the greatest research potential within digital orthodontics over the last five years. The most researched and cited topic was artificial intelligence and its applications in orthodontics. Apart from automated 2D or 3D cephalometric analysis, AI finds its application in facial analysis and decision-making algorithms, as well as in the evaluation of treatment progress and retention. Together with AI, other digital advances are shaping the face of today’s orthodontics. Without a doubt, the era of “old” orthodontics is at its end, and modern, face-driven orthodontics is on the way to becoming a reality in modern orthodontic practices. Full article

22 pages, 1122 KiB  
Article
Statistically Significant Differences in AI Support Levels for Project Management between SMEs and Large Enterprises
by Polona Tominc, Dijana Oreški, Vesna Čančer and Maja Rožman
AI 2024, 5(1), 136-157; https://doi.org/10.3390/ai5010008 - 05 Jan 2024
Viewed by 1714
Abstract
Background: This article presents an in-depth analysis of the statistically significant differences in AI support levels for project management between SMEs and large enterprises. The research is based on a comprehensive survey of 473 Slovenian SMEs and large enterprises. Methods: To validate the observed differences, the Mann–Whitney U test was employed. Results: The results confirm the presence of statistically significant differences between SMEs and large enterprises across multiple dimensions of AI support in project management. Large enterprises exhibit, on average, a higher level of AI adoption across all five AI utilization dimensions. Specifically, large enterprises scored significantly higher (p < 0.05) in AI adoption strategies and in adopting AI technologies for project tasks and team creation. The findings also underscore the significant differences (p < 0.05) between SMEs and large enterprises in their adoption and utilization of AI technologies for project management purposes. While large enterprises scored above 4 on several dimensions, with the highest average score (mean 4.46 on a 1-to-5 scale) for the use of predictive analytics tools to improve project work, the SMEs’ average scores were all below 4. SMEs in particular may lag in incorporating AI into project activities due to factors such as resource constraints, limited access to AI expertise, or risk aversion. Conclusions: The results underscore the need for targeted strategies to enhance AI adoption in SMEs and to leverage its benefits for successful project implementation and stronger competitiveness. Full article
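The Mann–Whitney U test applied in this study can be sketched in a few lines. The scores below are illustrative stand-ins for 1-to-5 Likert survey responses, not the paper’s data, and the two-sided p-value uses a plain normal approximation without tie correction:

```python
# Minimal Mann-Whitney U test sketch: compare AI-support scores between
# two independent groups (SMEs vs. large enterprises). Illustrative data.
import math

def mann_whitney_u(x, y):
    """Return (U statistic for x, two-sided p via normal approximation)."""
    combined = sorted((v, g) for g, vals in enumerate((x, y)) for v in vals)
    values = [v for v, _ in combined]
    ranks = {}
    i = 0
    while i < len(values):                      # average ranks for ties
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        avg_rank = (i + 1 + j) / 2              # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j
    r1 = sum(ranks[k] for k, (_, g) in enumerate(combined) if g == 0)
    n1, n2 = len(x), len(y)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p

sme = [2, 3, 3, 2, 4, 3, 2, 3, 3, 2]      # hypothetical SME scores
large = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4]    # hypothetical large-enterprise scores
u, p = mann_whitney_u(sme, large)
print(f"U = {u}, p = {p:.4f}")
```

For real analyses one would use a library routine with exact tie handling, but the rank-sum logic above is the core of the test.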
21 pages, 7476 KiB  
Article
A Flower Pollination Algorithm-Optimized Wavelet Transform and Deep CNN for Analyzing Binaural Beats and Anxiety
by Devika Rankhambe, Bharati Sanjay Ainapure, Bhargav Appasani and Amitkumar V. Jha
AI 2024, 5(1), 115-135; https://doi.org/10.3390/ai5010007 - 29 Dec 2023
Viewed by 1089
Abstract
Binaural beats are a low-frequency form of acoustic stimulation that may be heard between 200 and 900 Hz and can help reduce anxiety, as well as alter other psychological states, by affecting mood and cognitive function. However, prior research has only examined the impact of binaural beats on state and trait anxiety using the STAI scale; the level of anxiety has not yet been quantified, and during artifact removal the improper selection of wavelet parameters reduced the original signal energy. Hence, this research analyzes the level of anxiety while listening to binaural beats using a novel optimized wavelet transform, in which optimized wavelet parameters are extracted from the EEG signal using the flower pollination algorithm, so that artifacts are removed effectively. Moreover, EEG signals contain five types of brainwaves, and existing models have neither analyzed brainwaves other than delta waves optimally nor assessed the level of anxiety using binaural beats. To overcome this, deep convolutional neural network (CNN)-based signal processing is proposed. Deep features are extracted from optimized EEG signal parameters, which are precisely selected and adjusted to their most efficient values using the flower pollination algorithm, ensuring minimal loss of signal energy during artifact removal and preserving the integrity of the original EEG signal. These features enable accurate classification of the various levels of anxiety, yielding more reliable results on the effect of binaural beats on anxiety as reflected in brainwaves. Finally, the proposed model is implemented in Python, and the obtained results demonstrate its efficacy.
The proposed optimized wavelet transform with deep CNN-based signal processing outperforms existing techniques such as KNN, SVM, LDA, and Narrow-ANN, achieving an accuracy of 0.99, precision of 0.99, recall of 0.99, F1-score of 0.99, specificity of 0.999, and an error rate of 0.01. Thus, the optimized wavelet transform with a deep CNN can effectively decompose EEG data and extract anxiety-related deep features to analyze the effect of binaural beats on anxiety levels. Full article
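The wavelet decomposition at the core of this pipeline can be illustrated with a single-level Haar transform. The paper’s contribution is tuning the wavelet parameters with the flower pollination algorithm; this sketch shows only the plain, unoptimized building block on a synthetic signal:

```python
# One-level Haar wavelet step: split a signal into a low-pass approximation
# (slow content, e.g. delta-band activity) and high-pass detail coefficients
# (fast content and artifacts). The "EEG" values below are synthetic.
def haar_step(signal):
    """Return (approximation, detail) for pairs of consecutive samples."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

eeg = [1.0, 3.0, 2.0, 6.0, 5.0, 5.0, 4.0, 0.0]
approx, detail = haar_step(eeg)
print("approximation:", approx)
print("detail:", detail)
```

The original samples are recoverable as `a + d` and `a - d` for each coefficient pair, which is why a careless choice of which coefficients to discard (the parameter-selection problem the paper addresses) directly removes signal energy.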
24 pages, 399 KiB  
Article
Optimized Financial Planning: Integrating Individual and Cooperative Budgeting Models with LLM Recommendations
by I. de Zarzà, J. de Curtò, Gemma Roig and Carlos T. Calafate
AI 2024, 5(1), 91-114; https://doi.org/10.3390/ai5010006 - 25 Dec 2023
Viewed by 2986
Abstract
In today’s complex economic environment, individuals and households alike grapple with the challenge of financial planning. This paper introduces novel methodologies for both individual and cooperative (household) financial budgeting. We first propose an optimization framework for individual budget allocation, aiming to maximize savings by efficiently distributing monthly income among various expense categories. We then extend this model to households, addressing the added complexity of multiple incomes and shared expenses. The cooperative model prioritizes not only maximized savings but also the preferences and needs of each member, whether short-term needs or long-term aspirations, fostering a harmonious financial environment. A notable innovation in our approach is the integration of recommendations from a large language model (LLM). Given its vast training data and potent inferential capabilities, the LLM provides initial feasible solutions to our optimization problems, acting as a guide for individuals and households unfamiliar with the nuances of financial planning. Our preliminary results indicate that the LLM-recommended solutions yield budget plans that are both economically sound, in that they are consistent with established financial management principles and promote fiscal resilience and stability, and aligned with the financial goals and preferences of the parties concerned. This integration of AI-driven recommendations with econometric models, as an instantiation of an extended coevolutionary (EC) theory, paves the way for a new era in financial planning, making it more accessible and effective for a wider audience; it also exemplifies a new economic theory in which human behavior can be greatly influenced by AI agents. Full article
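The individual budget-allocation idea can be sketched as a simple constrained allocation: each category has a required minimum and a comfort level, income first covers the minimums, and the surplus is split between savings and category top-ups. The categories, amounts, and the 50% savings share are illustrative assumptions, not the paper’s model:

```python
# Toy budget allocator: cover category minimums, bank a share of the surplus,
# then top categories up toward their comfort levels. Leftovers go to savings.
def budget(income, categories, savings_share=0.5):
    """categories: {name: (minimum, comfort)} -> (allocation dict, savings)."""
    alloc = {name: lo for name, (lo, hi) in categories.items()}
    surplus = income - sum(alloc.values())
    if surplus < 0:
        raise ValueError("income does not cover required minimums")
    savings = surplus * savings_share
    spendable = surplus - savings
    for name, (lo, hi) in categories.items():
        top_up = min(hi - lo, spendable)
        alloc[name] += top_up
        spendable -= top_up
    return alloc, savings + spendable   # unspent top-up budget is saved too

cats = {"rent": (900, 900), "food": (300, 450), "leisure": (0, 200)}
alloc, savings = budget(2000, cats)
print(alloc, "savings:", round(savings, 2))
```

In the paper’s framing, an LLM would propose an initial feasible allocation of this kind, which the optimization model then refines against the household’s constraints and preferences.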
19 pages, 7710 KiB  
Article
Application of YOLOv8 and Detectron2 for Bullet Hole Detection and Score Calculation from Shooting Cards
by Marya Butt, Nick Glas, Jaimy Monsuur, Ruben Stoop and Ander de Keijzer
AI 2024, 5(1), 72-90; https://doi.org/10.3390/ai5010005 - 22 Dec 2023
Viewed by 3049
Abstract
Scoring targets in shooting sports is a crucial and time-consuming task that traditionally relies on manually counting bullet holes. This paper introduces an automatic score detection model using object detection techniques. The study contributes to the field of computer vision by comparing the performance of seven models (belonging to two different architectural setups) and by making the dataset publicly available. Another value-added aspect is the inclusion of three variants of the YOLOv8 object detection model, released in 2023. Five of the models are single-shot detectors, while two belong to the two-shot detector category. The dataset was manually captured at the shooting range and expanded by generating more versatile data using Python code. Before training, the images were resized (640 × 640) and augmented using the Roboflow API. The trained models were then assessed on the test dataset, and their performance was compared using metrics such as mAP50, mAP50-95, precision, and recall. The results showed that YOLOv8 models can detect multiple objects with good confidence scores. Among these models, YOLOv8m performed best, with the highest mAP50 value of 96.7%, followed by YOLOv8s with an mAP50 of 96.5%. If the system is to be deployed in a real-time environment, YOLOv8s is the better choice: it requires significantly less inference time (2.3 ms) than YOLOv8m (5.7 ms) while still achieving a competitive mAP50 of 96.5%. Full article
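A detection counts toward mAP50 when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal IoU sketch, with illustrative box coordinates rather than real annotations:

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
# This is the matching criterion underlying the mAP50 metric reported above.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 50, 50)    # hypothetical predicted bullet-hole box
truth = (12, 12, 52, 52)   # hypothetical annotated box
score = iou(pred, truth)
print(f"IoU = {score:.3f}, counted at the 0.5 threshold: {score >= 0.5}")
```

mAP50-95 repeats the same matching at IoU thresholds from 0.5 to 0.95 and averages the results, which is why it is the stricter of the two metrics.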
(This article belongs to the Topic Advances in Artificial Neural Networks)
17 pages, 798 KiB  
Review
Data Science in Finance: Challenges and Opportunities
by Xianrong Zheng, Elizabeth Gildea, Sheng Chai, Tongxiao Zhang and Shuxi Wang
AI 2024, 5(1), 55-71; https://doi.org/10.3390/ai5010004 - 22 Dec 2023
Viewed by 1992
Abstract
Data science has become increasingly popular due to emerging technologies, including generative AI, big data, and deep learning. It can provide insights from data that are hard to obtain from a human perspective. In finance, data science helps provide more personalized and safer experiences for customers and develop cutting-edge solutions for companies. This paper surveys the challenges and opportunities in applying data science to finance. It provides a state-of-the-art review of financial technologies, algorithmic trading, and fraud detection. The paper also identifies two research topics: how to use generative AI in algorithmic trading, and how to apply it to fraud detection. Finally, the paper discusses the challenges posed by generative AI, such as ethical considerations, potential biases, and data security. Full article
(This article belongs to the Special Issue Feature Papers for AI)
17 pages, 1667 KiB  
Review
AI Advancements: Comparison of Innovative Techniques
by Hamed Taherdoost and Mitra Madanchian
AI 2024, 5(1), 38-54; https://doi.org/10.3390/ai5010003 - 20 Dec 2023
Viewed by 2153
Abstract
In recent years, artificial intelligence (AI) has seen remarkable advancements, stretching the limits of what is possible and opening up new frontiers. This comparative review investigates the evolving landscape of AI advancements, providing a thorough exploration of innovative techniques that have shaped the field. Beginning with the fundamentals of AI, including traditional machine learning and the transition to data-driven approaches, the narrative progresses through core AI techniques such as reinforcement learning, generative adversarial networks, transfer learning, and neuroevolution. The significance of explainable AI (XAI) is emphasized in this review, which also explores the intersection of quantum computing and AI. The review delves into the potential transformative effects of quantum technologies on AI advancements and highlights the challenges associated with their integration. Ethical considerations in AI, including discussions on bias, fairness, transparency, and regulatory frameworks, are also addressed. This review aims to contribute to a deeper understanding of the rapidly evolving field of AI. Reinforcement learning, generative adversarial networks, and transfer learning lead AI research, with a growing emphasis on transparency. Neuroevolution and quantum AI, though less studied, show potential for future developments. Full article
(This article belongs to the Special Issue Feature Papers for AI)
21 pages, 5738 KiB  
Article
A Time Series Approach to Smart City Transformation: The Problem of Air Pollution in Brescia
by Elena Pagano and Enrico Barbierato
AI 2024, 5(1), 17-37; https://doi.org/10.3390/ai5010002 - 20 Dec 2023
Viewed by 1100
Abstract
Air pollution is a paramount issue, influenced by a combination of natural and anthropogenic sources, various diffusion modes, and profound repercussions for the environment and human health. Herein, the power of time series data becomes evident, as it proves indispensable for capturing pollutant concentrations over time. These data unveil critical insights, including trends, seasonal and cyclical patterns, and the crucial property of stationarity. Brescia, a town located in Northern Italy, faces the pressing challenge of air pollution. To enhance its status as a smart city and address this concern effectively, statistical methods employed in time series analysis play a pivotal role. This article is dedicated to examining how ARIMA and LSTM models can empower Brescia as a smart city by fitting and forecasting specific pollution forms. These models have established themselves as effective tools for predicting future pollution levels. Notably, the intricate nature of the phenomena becomes apparent through the high variability of particulate matter. Even during extraordinary events like the COVID-19 lockdown, where substantial reductions in emissions were observed, the analysis revealed that this reduction did not proportionally decrease PM2.5 and PM10 concentrations. This underscores the complex nature of the issue and the need for advanced data-driven solutions to make Brescia a truly smart city. Full article
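The autoregressive component at the heart of the ARIMA models used here can be sketched with a least-squares AR(1) fit; the PM10 values below are synthetic placeholders, not Brescia’s measurements:

```python
# AR(1) sketch: fit x[t] = c + phi * x[t-1] by ordinary least squares and
# roll the model forward to forecast future pollutant concentrations.
def fit_ar1(series):
    """Least-squares estimate of (c, phi) for x[t] = c + phi * x[t-1]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
          sum((a - mx) ** 2 for a in x)
    c = my - phi * mx
    return c, phi

def forecast(series, steps, c, phi):
    """Iterate the fitted recurrence `steps` times past the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

pm10 = [42.0, 45.0, 44.0, 47.0, 50.0, 48.0, 52.0, 51.0, 49.0, 53.0]  # synthetic
c, phi = fit_ar1(pm10)
print(f"AR(1): c = {c:.2f}, phi = {phi:.2f}")
print("3-step forecast:", [round(v, 1) for v in forecast(pm10, 3, c, phi)])
```

Full ARIMA adds differencing (to handle non-stationarity, which the abstract highlights) and a moving-average term; in practice one would fit it with a statistics library rather than by hand.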
(This article belongs to the Special Issue Feature Papers for AI)
16 pages, 545 KiB  
Article
A Time Window Analysis for Time-Critical Decision Systems with Applications on Sports Climbing
by Heiko Oppel and Michael Munz
AI 2024, 5(1), 1-16; https://doi.org/10.3390/ai5010001 - 19 Dec 2023
Viewed by 907
Abstract
Human monitoring systems are already utilized in various fields such as assisted living, healthcare, or sport and fitness. They can support users in everyday life or act as a pre-warning system. We developed a system, integrated into a belay device, to monitor the ascent of a sport climber. This paper presents the first time series analysis of a climber’s fall using such a system. A convolutional neural network handles both the feature engineering of the sensor information and the classification task, so time is considered implicitly by the network. We analyzed the effect of the time window size, with a focus on exploring the respective results. The neural network models were then tested against an existing principle based on a mechanical mechanism. We show that the size of the time window is a decisive factor in a time-critical system: depending on the window size, the mechanical principle was able to outperform the neural network. Nevertheless, most of our models outperformed the mechanical baseline, returning promising results by predicting a climber’s fall within up to 91.8 ms. Full article
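The window-size trade-off the paper analyzes can be illustrated with a simple sliding-window segmentation of a sensor stream; the sample values, window size, step, and the 400 Hz rate mentioned in the comment are illustrative assumptions, not the system’s configuration:

```python
# Sliding-window segmentation: group a sensor stream into fixed-size windows
# for a classifier. Larger windows carry more context but delay the decision,
# which matters in a time-critical fall-detection setting.
def sliding_windows(samples, window_size, step):
    """Return consecutive windows of `window_size` samples, advancing by `step`."""
    return [samples[i:i + window_size]
            for i in range(0, len(samples) - window_size + 1, step)]

accel_z = list(range(20))                      # stand-in for accelerometer samples
windows = sliding_windows(accel_z, window_size=8, step=4)
print(f"{len(windows)} windows of {len(windows[0])} samples each")
# At an assumed 400 Hz sampling rate, an 8-sample window spans 20 ms,
# so shrinking the window buys reaction time at the cost of context.
```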