Computers, Volume 12, Issue 8 (August 2023) – 21 articles

Cover Story: Since the adoption of connected and automated vehicles is not expected to happen instantly, not all of its elements are going to be connected at the early deployment stages. We consider a scenario where vehicles approaching a traffic light are connected to each other, but the traffic light itself is not cooperative. Information about intended trajectories, such as decisions on how and when to accelerate, decelerate, and stop, is communicated among the vehicles involved. We provide an optimization-based procedure for the efficient and safe passing of traffic lights or other temporary road blockages using vehicle-to-vehicle communication. We locally optimize objectives that promote efficiency, such as less deceleration and a larger minimum velocity, while maintaining safety. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
13 pages, 2908 KiB  
Article
Brain Pathology Classification of MR Images Using Machine Learning Techniques
by Nehad T. A. Ramaha, Ruaa M. Mahmood, Alaa Ali Hameed, Norma Latif Fitriyani, Ganjar Alfian and Muhammad Syafrudin
Computers 2023, 12(8), 167; https://doi.org/10.3390/computers12080167 - 19 Aug 2023
Viewed by 1551
Abstract
A brain tumor is essentially a collection of aberrant tissues, so it is crucial to classify tumors of the brain using MRI before beginning therapy. Tumor segmentation and classification from brain MRI scans using machine learning techniques are widely recognized as challenging and important tasks. The potential applications of machine learning in diagnostics, preoperative planning, and postoperative evaluations are substantial. Accurate determination of the tumor’s location on a brain MRI is of paramount importance. The advancement of precise machine learning classifiers and other technologies will enable doctors to detect malignancies without requiring invasive procedures on patients. Pre-processing, skull stripping, and tumor segmentation are the steps involved in detecting a brain tumor and measuring its size and form. CNN models become overfitted after a certain amount of training because of the large number of training images used to train them; that is why this study applies transfer learning with a deep CNN. A CNN-based ReLU architecture and an SVM with fused features retrieved via HOG and LBP are used to classify brain MRI tumors (glioma or meningioma). The method’s efficacy is measured in terms of precision, recall, F-measure, and accuracy. This study showed that the accuracy of the SVM with combined LBP and HOG features is 97%, and that of the deep CNN is 98%. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)
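As a rough illustration of the feature-fusion step described in the abstract above, the following sketch extracts HOG and LBP descriptors and trains an SVM on their concatenation, assuming scikit-image and scikit-learn; all parameter values, helper names, and the synthetic demo data are illustrative, not the authors' implementation.

```python
# Minimal sketch: fuse HOG and LBP descriptors of a 2-D grayscale slice and
# classify with an SVM. Parameters and demo data are illustrative assumptions.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def fused_features(image):
    """Concatenate a HOG descriptor and an LBP histogram for one image."""
    hog_vec = hog(image, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

def train_classifier(images, labels):
    """Fit an SVM on fused HOG+LBP features (labels: e.g. 0 = glioma, 1 = meningioma)."""
    feats = np.array([fused_features(img) for img in images])
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(feats, labels)
    return clf

# Tiny synthetic demo in place of pre-processed, skull-stripped MRI slices.
rng = np.random.default_rng(0)
demo = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(6)]
clf = train_classifier(demo, [0, 1, 0, 1, 0, 1])
print(clf.predict([fused_features(demo[0])]))
```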

19 pages, 3766 KiB  
Article
Enhancing Data Security: A Cutting-Edge Approach Utilizing Protein Chains in Cryptography and Steganography
by Noura A. Mawla and Hussein K. Khafaji
Computers 2023, 12(8), 166; https://doi.org/10.3390/computers12080166 - 19 Aug 2023
Cited by 1 | Viewed by 1644
Abstract
Nowadays, with the increase in cyber-attacks, hacking, and data theft, maintaining data security and confidentiality is of paramount importance. Several techniques are used in cryptography and steganography to ensure data safety during the transfer of information between two parties without interference from an unauthorized third party. This paper proposes a modern approach to cryptography and steganography based on exploiting a new environment: protein bases and protein chains, which are used to encrypt and hide sensitive data. The protein bases are used to form a cipher key whose length is twice the length of the data to be encrypted. During the encryption process, the plain data and the cipher key are represented in several forms, including hexadecimal and binary representation, and several arithmetic operations are performed on them, in addition to the use of logic gates in the encryption process to increase encrypted data randomness. As for the protein chains, they are used as a cover to hide the encrypted data. The process of hiding inside the protein bases will be performed in a sophisticated manner that is undetectable by statistical analysis methods, where each byte will be fragmented into three groups of bits in a special order, and each group will be included in one specific protein base that will be allocated to this group only, depending on the classifications of bits that have been previously stored in special databases. Each byte of the encrypted data will be hidden in three protein bases, and these protein bases will be distributed randomly over the protein chain, depending on an equation designed for this purpose. The advantages of these proposed algorithms are that they are fast in encrypting and hiding data, scalable (i.e., insensitive to the size of the plain data), and lossless. The experiments showed that the proposed cryptography algorithm outperforms the most recent algorithms in terms of correlation and entropy values, which reach −0.6778 and 7.99941, respectively, and the proposed steganography algorithm has the highest payload of 2.666 among five well-known hiding algorithms that used DNA sequences as the cover of the data. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
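The hiding step described above splits each encrypted byte into three bit groups and maps each group to one protein base. The sketch below illustrates that idea only; the 3/3/2 split, the amino-acid alphabet subsets, and the function names are assumptions for illustration, not the paper's actual lookup tables or distribution equation.

```python
# Illustrative sketch: split each encrypted byte into three bit groups and map
# each group to one protein (amino-acid) base. The split and the tables are
# assumptions for illustration, not the authors' actual scheme.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino-acid letters

# Hypothetical lookup tables: one alphabet subset per bit group.
GROUP_TABLES = [
    AMINO_ACIDS[0:8],    # encodes the top 3 bits (values 0-7)
    AMINO_ACIDS[8:16],   # encodes the middle 3 bits (values 0-7)
    AMINO_ACIDS[16:20],  # encodes the bottom 2 bits (values 0-3)
]

def hide_byte(byte: int) -> str:
    """Map one encrypted byte to three protein bases."""
    groups = [(byte >> 5) & 0b111, (byte >> 2) & 0b111, byte & 0b11]
    return "".join(GROUP_TABLES[i][g] for i, g in enumerate(groups))

def hide_payload(data: bytes) -> str:
    return "".join(hide_byte(b) for b in data)

print(hide_payload(b"\x9c"))  # one byte -> three bases: 'FST'
```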

19 pages, 5924 KiB  
Article
Pm2.5 Time Series Imputation with Deep Learning and Interpolation
by Anibal Flores, Hugo Tito-Chura, Deymor Centty-Villafuerte and Alejandro Ecos-Espino
Computers 2023, 12(8), 165; https://doi.org/10.3390/computers12080165 - 16 Aug 2023
Cited by 1 | Viewed by 1799
Abstract
Commonly, regression for time series imputation has been implemented directly through regression models based on statistical, machine learning, and deep learning techniques. In this work, a novel approach is proposed based on a classification model that determines the NA value class, and from this, two types of interpolation are implemented: polynomial or flipped polynomial. An hourly pm2.5 time series from Ilo City in southern Peru was chosen as a case study. The results obtained show that for gaps of one NA value, the proposal in most cases presents superior results to techniques such as ARIMA, LSTM, BiLSTM, GRU, and BiGRU; on average, in terms of R², the proposal exceeds the implemented benchmark models by between 2.4341% and 19.96%. Finally, supported by the results, it can be stated that the proposal constitutes a good alternative for short-gap imputation in pm2.5 time series. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
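As a rough illustration of gap filling with local polynomial interpolation (the building block the abstract refers to), the sketch below imputes a single missing hourly value with numpy; the "flipped polynomial" variant and the classifier that selects between the two interpolations are the paper's contribution and are not reproduced here.

```python
# Rough illustration: impute one missing (NaN) value in an hourly pm2.5 series
# with local polynomial interpolation over its neighbours. Window size and
# degree are illustrative assumptions.
import numpy as np

def impute_single_gap(series, idx, window=3, degree=2):
    """Fit a low-degree polynomial to the neighbours of series[idx] and predict it."""
    x = np.arange(idx - window, idx + window + 1)
    mask = (x >= 0) & (x < len(series)) & (x != idx)
    x_known = x[mask]
    y_known = series[x_known]
    coeffs = np.polyfit(x_known, y_known, degree)
    return np.polyval(coeffs, idx)

pm25 = np.array([21.0, 24.5, 26.1, np.nan, 27.8, 25.9, 23.2])
pm25[3] = impute_single_gap(pm25, 3)
print(round(pm25[3], 2))
```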

17 pages, 2020 KiB  
Article
Requirement Change Prediction Model for Small Software Systems
by Rida Fatima, Furkh Zeshan, Adnan Ahmad, Muhammad Hamid, Imen Filali, Amel Ali Alhussan and Hanaa A. Abdallah
Computers 2023, 12(8), 164; https://doi.org/10.3390/computers12080164 - 14 Aug 2023
Viewed by 1050
Abstract
The software industry plays a vital role in driving technological advancements. Software projects are complex and consist of many components, so change is unavoidable in these projects. The change in software requirements must be predicted early to preserve resources, since unanticipated change can lead to project failures. This work focuses on small-scale software systems in which requirements are changed gradually. The work provides a probabilistic prediction model, which predicts the probability of changes in software requirement specifications. The first part of the work analyzes the changes in software requirements due to certain variables, with the help of stakeholders, developers, and experts, using the questionnaire method. Then, the proposed model incorporates their knowledge into a Bayesian network as conditional probabilities of independent and dependent variables. The proposed approach utilizes the variable elimination method to obtain the posterior probability of the revisions in the software requirement document. The model was evaluated by sensitivity analysis and comparison methods. For a given dataset, the proposed model computed the probability of low-state revisions as 0.42 and the probability of high-state revisions as 0.45. Thus, the results showed that the proposed approach can predict the change in the requirements document accurately, outperforming existing models. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
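A hedged sketch of the inference step the abstract describes, using the pgmpy library: a small Bayesian network whose conditional probabilities would, in the paper, come from the stakeholder questionnaires. The two parent variables and every probability value below are hypothetical placeholders.

```python
# Hedged sketch of posterior inference over requirement change via variable
# elimination. Variables and all probabilities are placeholders, not the
# paper's questionnaire-elicited values.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("ScopeClarity", "ReqChange"), ("ClientInvolvement", "ReqChange")])

cpd_scope = TabularCPD("ScopeClarity", 2, [[0.7], [0.3]])          # 0 = clear, 1 = unclear
cpd_client = TabularCPD("ClientInvolvement", 2, [[0.6], [0.4]])    # 0 = high, 1 = low
cpd_change = TabularCPD(
    "ReqChange", 2,
    [[0.9, 0.6, 0.7, 0.2],    # P(ReqChange = low  | parents)
     [0.1, 0.4, 0.3, 0.8]],   # P(ReqChange = high | parents)
    evidence=["ScopeClarity", "ClientInvolvement"], evidence_card=[2, 2],
)
model.add_cpds(cpd_scope, cpd_client, cpd_change)

# Variable elimination yields the posterior probability of requirement change.
posterior = VariableElimination(model).query(
    variables=["ReqChange"], evidence={"ScopeClarity": 1}
)
print(posterior)
```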

20 pages, 4705 KiB  
Article
Timing and Performance Metrics for TWR-K70F120M Device
by George K. Adam
Computers 2023, 12(8), 163; https://doi.org/10.3390/computers12080163 - 14 Aug 2023
Viewed by 907
Abstract
Currently, single-board computers (SBCs) are sufficiently powerful to run real-time operating systems (RTOSs) and applications. The purpose of this research was to investigate the timing performance of an NXP TWR-K70F120M device with μClinux OS on concurrently running tasks with real-time features and constraints, and provide new and distinct technical data not yet available in the literature. Towards this goal, a custom-built multithreaded application with specific compute-intensive sorting and matrix operations was developed and applied to obtain measurements in specific timing metrics, including task execution time, thread waiting time, and response time. In this way, this research extends the literature by documenting performance results on specific timing metrics. The performance of this device was additionally benchmarked and validated against commonly used platforms, a Raspberry Pi4 and BeagleBone AI SBCs. The experimental results showed that this device stands well both in terms of timing and efficiency metrics. Execution times were lower than with the other platforms, by approximately 56% in the case of two threads, and by 29% in the case of 32-thread configurations. The outcomes could be of practical value to companies which intend to use such low-cost embedded devices in the development of reliable real-time industrial applications. Full article

34 pages, 1269 KiB  
Article
Stochastic Modeling for Intelligent Software-Defined Vehicular Networks: A Survey
by Banoth Ravi, Blesson Varghese, Ilir Murturi, Praveen Kumar Donta, Schahram Dustdar, Chinmaya Kumar Dehury and Satish Narayana Srirama
Computers 2023, 12(8), 162; https://doi.org/10.3390/computers12080162 - 12 Aug 2023
Cited by 4 | Viewed by 1910
Abstract
Digital twins and the Internet of Things (IoT) have gained significant research attention in recent years due to their potential advantages in various domains, and vehicular ad hoc networks (VANETs) are one such application. VANETs can provide a wide range of services for passengers and drivers, including safety, convenience, and information. The dynamic nature of these environments poses several challenges, including intermittent connectivity, quality of service (QoS), and heterogeneous applications. Combining intelligent technologies and software-defined networking (SDN) with VANETs (termed intelligent software-defined vehicular networks (iSDVNs)) meets these challenges. In this context, several types of research have been published, and we summarize their benefits and limitations. We also aim to survey stochastic modeling and performance analysis for iSDVNs and the uses of machine-learning algorithms through digital twin networks (DTNs), which are also part of iSDVNs. We first present a taxonomy of SDVN architectures based on their modes of operation. Next, we survey and classify the state-of-the-art iSDVN routing protocols, stochastic computations, and resource allocations. The evolution of SDN causes its complexity to increase, posing a significant challenge to efficient network management. Digital twins offer a promising solution to address these challenges. This paper explores the relationship between digital twins and SDN and also proposes a novel approach to improve network management in SDN environments by increasing digital twin capabilities. We analyze the pitfalls of these state-of-the-art iSDVN protocols and compare them using tables. Finally, we summarize several challenges faced by current iSDVNs and possible future directions to make iSDVNs autonomous. Full article

16 pages, 1350 KiB  
Article
Face Detection Using a Capsule Network for Driver Monitoring Application
by János Hollósi, Áron Ballagi, Gábor Kovács, Szabolcs Fischer and Viktor Nagy
Computers 2023, 12(8), 161; https://doi.org/10.3390/computers12080161 - 12 Aug 2023
Cited by 1 | Viewed by 1458
Abstract
Bus driver distraction and cognitive load lead to higher accident risk. Driver distraction sources and complex physical and psychological effects must be recognized and analyzed in real-world driving conditions to reduce risk and enhance overall road safety. The implementation of a camera-based system utilizing computer vision for face recognition emerges as a highly viable and effective driver monitoring approach applicable in public transport. Reliable, accurate, and unnoticeable software solutions need to be developed to reach the appropriate robustness of the system. The reliability of data recording depends mainly on external factors, such as vibration, camera lens contamination, lighting conditions, and other optical performance degradations. The current study introduces Capsule Networks (CapsNets) for image processing and face detection tasks. The authors’ goal is to create a fast and accurate system compared to state-of-the-art Neural Network (NN) algorithms. Based on the seven tests completed, the authors’ solution outperformed the other networks in terms of performance degradation in six out of seven cases. The results show that the applied capsule-based solution performs well, and the degradation in efficiency is noticeably smaller than for the presented convolutional neural networks when adversarial attack methods are used. From an application standpoint, ensuring the security and effectiveness of an image-based driver monitoring system relies heavily on the mitigation of disruptive occurrences, commonly referred to as “image distractions,” which represent attacks on the neural network. Full article

45 pages, 2805 KiB  
Review
Medical Image Encryption: A Comprehensive Review
by Saja Theab Ahmed, Dalal Abdulmohsin Hammood, Raad Farhood Chisab, Ali Al-Naji and Javaan Chahl
Computers 2023, 12(8), 160; https://doi.org/10.3390/computers12080160 - 11 Aug 2023
Cited by 2 | Viewed by 2613
Abstract
In medical information systems, image data can be considered crucial information. As imaging technology and methods for analyzing medical images advance, there will be a greater wealth of data available for study. Hence, protecting those images is essential. Image encryption methods are crucial in multimedia applications for ensuring the security and authenticity of digital images. Recently, the encryption of medical images has garnered significant attention from academics due to concerns about the safety of medical communication. Advanced approaches, such as e-health, smart health, and telemedicine applications, are employed in the medical profession. This has highlighted the issue that medical images are often produced and shared online, necessitating protection against unauthorized use. Full article

14 pages, 499 KiB  
Article
Genetic Approach to Improve Cryptographic Properties of Balanced Boolean Functions Using Bent Functions
by Erol Özçekiç, Selçuk Kavut and Hakan Kutucu
Computers 2023, 12(8), 159; https://doi.org/10.3390/computers12080159 - 09 Aug 2023
Viewed by 875
Abstract
Recently, balanced Boolean functions with an even number n of variables achieving very good autocorrelation properties have been obtained for 12 ≤ n ≤ 26. These functions attain a maximum absolute value in the autocorrelation spectrum (without considering the zero point) of less than 2^(n/2) and are found by using a heuristic search algorithm that is based on the design method of an infinite class of such functions for a higher number of variables. Here, we consider balanced Boolean functions that are closest to the bent functions in terms of the Hamming distance and perform a genetic algorithm efficiently aiming to optimize their cryptographic properties, which provides better absolute indicator values for all of those values of n for the first time. We also observe that among our results, the functions for 16 ≤ n ≤ 26 have nonlinearity greater than 2^(n−1) − 2^(n/2). In the process, our search strategy produces balanced Boolean functions with the best-known nonlinearity for 8 ≤ n ≤ 16. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
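For readers unfamiliar with the two figures of merit mentioned above, the sketch below computes the nonlinearity (via the Walsh-Hadamard transform) and the absolute indicator (maximum autocorrelation magnitude away from the zero shift) of a toy Boolean function given as a ±1 truth table; the genetic search itself is not reproduced.

```python
# Sketch: nonlinearity and absolute indicator of a Boolean function supplied
# as a +/-1 truth table of length 2**n. The toy truth table is arbitrary.
import numpy as np

def walsh_hadamard(signs):
    """Fast Walsh-Hadamard transform of a +/-1 truth table."""
    w = signs.astype(np.int64).copy()
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            a, b = w[i:i + h].copy(), w[i + h:i + 2 * h].copy()
            w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return w

def nonlinearity(signs):
    n = int(np.log2(len(signs)))
    return 2 ** (n - 1) - np.max(np.abs(walsh_hadamard(signs))) // 2

def absolute_indicator(signs):
    # Autocorrelation at shift d is sum_x f(x) * f(x XOR d); take the max |.| for d != 0.
    size = len(signs)
    acs = [sum(signs[x] * signs[x ^ d] for x in range(size)) for d in range(1, size)]
    return max(abs(a) for a in acs)

# Toy example: a 4-variable function (truth table in +/-1 form).
truth_table = np.array([1, 1, 1, -1, 1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1])
print(nonlinearity(truth_table), absolute_indicator(truth_table))
```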

19 pages, 2089 KiB  
Article
Downlink Power Allocation for CR-NOMA-Based Femtocell D2D Using Greedy Asynchronous Distributed Interference Avoidance Algorithm
by Nahla Nur Elmadina, Rashid Saeed, Elsadig Saeid, Elmustafa Sayed Ali, Maha Abdelhaq, Raed Alsaqour and Nawaf Alharbe
Computers 2023, 12(8), 158; https://doi.org/10.3390/computers12080158 - 03 Aug 2023
Cited by 2 | Viewed by 1296
Abstract
This paper focuses on downlink power allocation for a cognitive radio-based non-orthogonal multiple access (CR-NOMA) system in a femtocell environment involving device-to-device (D2D) communication. The proposed power allocation scheme employs the greedy asynchronous distributed interference avoidance (GADIA) algorithm. This research aims to optimize the power allocation in the downlink transmission, considering the unique characteristics of the CR-NOMA-based femtocell D2D system. The GADIA algorithm is utilized to mitigate interference and effectively optimize power allocation across the network. This research uses a fairness index to present a novel fairness-constrained power allocation algorithm for a downlink non-orthogonal multiple access (NOMA) system. Through extensive simulations, the maximum rate under fairness (MRF) algorithm is shown to optimize system performance while maintaining fairness among users effectively. The fairness index is demonstrated to be adaptable to various user counts, offering a specified range with excellent responsiveness. The implementation of the GADIA algorithm exhibits promising results for sub-optimal frequency band distribution within the network. Mathematical models evaluated in MATLAB further confirm the superiority of CR-NOMA over optimum power allocation NOMA (OPA) and fixed power allocation NOMA (FPA) techniques. Full article
(This article belongs to the Special Issue Advances in Energy-Efficient Computer and Network Systems)
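The core of GADIA is that each link asynchronously re-selects the frequency band on which it measures the least aggregate interference. The toy simulation below illustrates only that greedy step; channel gains, powers, and band counts are made-up values, and the NOMA power-allocation and fairness layers of the paper are omitted.

```python
# Toy sketch of greedy asynchronous band selection: each D2D link, in turn,
# moves to the band on which it would see the least aggregate interference.
import numpy as np

rng = np.random.default_rng(0)
n_links, n_bands = 6, 2
gain = rng.uniform(0.1, 1.0, size=(n_links, n_links))   # cross-link channel gains
power = np.full(n_links, 1.0)                            # made-up transmit powers
band = rng.integers(0, n_bands, size=n_links)            # initial random assignment

def interference_on(link, candidate_band):
    """Aggregate interference link would see if it moved to candidate_band."""
    others = [j for j in range(n_links) if j != link and band[j] == candidate_band]
    return sum(power[j] * gain[j, link] for j in others)

for _ in range(10):                                      # a few asynchronous sweeps
    for link in range(n_links):
        band[link] = min(range(n_bands), key=lambda b: interference_on(link, b))

print(band)
```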

16 pages, 1407 KiB  
Article
Joining Federated Learning to Blockchain for Digital Forensics in IoT
by Wejdan Almutairi and Tarek Moulahi
Computers 2023, 12(8), 157; https://doi.org/10.3390/computers12080157 - 03 Aug 2023
Cited by 1 | Viewed by 1562
Abstract
In present times, the Internet of Things (IoT) is becoming the new era in technology by including smart devices in every aspect of our lives. Smart devices in IoT environments are increasing and storing large amounts of sensitive data, which attracts a lot of cybersecurity threats. With these attacks, digital forensics is needed to conduct investigations to identify when and where the attacks happened and acquire information to identify the persons responsible for the attacks. However, digital forensics in an IoT environment is a challenging area of research due to the multiple locations that contain data, traceability of the collected evidence, ensuring integrity, difficulty accessing data from multiple sources, and transparency in the process of collecting evidence. For this reason, we proposed combining two promising technologies to provide a sufficient solution. We used federated learning to train models locally based on data stored on the IoT devices using a dataset designed to represent attacks on the IoT environment. Afterward, we performed aggregation via blockchain by collecting the parameters from the IoT gateway to make the blockchain lightweight. The results of our framework are promising in terms of consumed gas in the blockchain and an accuracy of over 98% using MLP in the federated learning phase. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
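As a sketch of the aggregation step described above, the snippet below performs a FedAvg-style weighted average of per-client MLP parameters at the gateway; the blockchain transaction, the local MLP training, and the IoT attack dataset are omitted, and weighting by local sample count is an assumption.

```python
# Sketch of federated averaging at the gateway: average per-client parameter
# lists (one ndarray per layer), weighted by local sample counts (assumed).
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter lists."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        layer_sum = sum(w[layer] * (size / total)
                        for w, size in zip(client_weights, client_sizes))
        averaged.append(layer_sum)
    return averaged

# Two hypothetical clients with a tiny one-layer "MLP" (weights + bias).
client_a = [np.array([[0.2, -0.1], [0.4, 0.3]]), np.array([0.1, 0.0])]
client_b = [np.array([[0.6,  0.1], [0.0, 0.5]]), np.array([0.3, 0.2])]
global_model = federated_average([client_a, client_b], client_sizes=[200, 100])
print(global_model[0])
```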

14 pages, 1686 KiB  
Article
Is the Privacy Paradox a Domain-Specific Phenomenon?
by Ron S. Hirschprung
Computers 2023, 12(8), 156; https://doi.org/10.3390/computers12080156 - 02 Aug 2023
Viewed by 1698
Abstract
The digital era introduces significant challenges for privacy protection, which grow constantly as technology advances. Privacy is a personal trait, and individuals may desire a different level of privacy, which is known as their “privacy concern”. To achieve privacy, the individual has to act in the digital world, taking steps that define their “privacy behavior”. It has been found that there is a gap between people’s privacy concern and their privacy behavior, a phenomenon that is called the “privacy paradox”. In this research, we investigated whether the privacy paradox is domain-specific; in other words, does it vary for an individual when that person moves between different domains, for example, when using e-Health services vs. online social networks? A unique metric was developed to estimate the paradox in a way that enables comparisons, and an empirical study was conducted in which n = 437 validated participants acted in eight domains. It was found that the domain does indeed affect the magnitude of the privacy paradox. This finding has a profound significance both for understanding the privacy paradox phenomenon and for the process of developing effective means to protect privacy. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)

15 pages, 2214 KiB  
Article
A Comprehensive Approach to Image Protection in Digital Environments
by William Villegas-Ch, Joselin García-Ortiz and Jaime Govea
Computers 2023, 12(8), 155; https://doi.org/10.3390/computers12080155 - 02 Aug 2023
Cited by 1 | Viewed by 1041
Abstract
Protecting the integrity of images has become a growing concern due to the ease of manipulation and unauthorized dissemination of visual content. This article presents a comprehensive approach to safeguarding images’ authenticity and reliability through watermarking techniques. The main goal is to develop effective strategies that preserve the visual quality of images and are resistant to various attacks. The work focuses on developing a watermarking algorithm in Python, implemented with embedding in the spatial domain, transformation in the frequency domain, and pixel modification techniques. A thorough evaluation of efficiency, accuracy, and robustness is performed using numerical metrics and visual assessment to validate the embedded watermarks. The results demonstrate the algorithm’s effectiveness in protecting the integrity of the images, although some attacks may cause visible degradation. Likewise, a comparison with related works is made to highlight the relevance and effectiveness of the proposed techniques. It is concluded that watermarks provide an additional layer of protection in applications where the authenticity and integrity of the image are essential. In addition, the importance of future research that addresses avenues for improvement and new applications to strengthen the protection of images and other digital media is highlighted. Full article
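As a generic illustration of spatial-domain embedding, one of the techniques named in the abstract, the sketch below hides a binary watermark in the least significant bits of a grayscale image with numpy; it is not the authors' algorithm, which also uses frequency-domain transformation and a fuller evaluation.

```python
# Generic LSB watermarking sketch: write watermark bits into the least
# significant bits of the first pixels of a grayscale (uint8) image.
import numpy as np

def embed_lsb(image, watermark_bits):
    """Embed watermark_bits into the LSBs of the flattened image."""
    flat = image.flatten().copy()
    flat[:len(watermark_bits)] = (flat[:len(watermark_bits)] & 0xFE) | watermark_bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Read back the first n_bits LSBs."""
    return image.flatten()[:n_bits] & 1

cover = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, mark)
assert np.array_equal(extract_lsb(stego, len(mark)), mark)
```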

16 pages, 2729 KiB  
Article
Cooperative Vehicles versus Non-Cooperative Traffic Light: Safe and Efficient Passing
by Johan Thunberg, Taqwa Saeed, Galina Sidorenko, Felipe Valle and Alexey Vinel
Computers 2023, 12(8), 154; https://doi.org/10.3390/computers12080154 - 30 Jul 2023
Cited by 1 | Viewed by 1207
Abstract
Connected and automated vehicles (CAVs) will be a key component of future cooperative intelligent transportation systems (C-ITS). Since the adoption of C-ITS is not foreseen to happen instantly, not all of its elements are going to be connected at the early deployment stages. We consider a scenario where vehicles approaching a traffic light are connected to each other, but the traffic light itself is not cooperative. Information about intended trajectories, such as decisions on how and when to accelerate, decelerate, and stop, is communicated among the vehicles involved. We provide an optimization-based procedure for efficient and safe passing of traffic lights (or other temporary road blockages) using vehicle-to-vehicle communication (V2V). We locally optimize objectives that promote efficiency, such as less deceleration and a larger minimum velocity, while maintaining safety in terms of no collisions. The procedure is computationally efficient as it mainly involves a gradient descent algorithm for a single parameter. Full article
(This article belongs to the Special Issue Cooperative Vehicular Networking 2023)
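The procedure above reduces to gradient descent over a single parameter. The sketch below shows only that structure: a one-dimensional descent with a numerical gradient over a made-up objective that penalizes hard deceleration, a low minimum velocity, and crossing a stop line; the objective and all constants are stand-ins, not the paper's formulation.

```python
# Illustrative single-parameter gradient descent. The deceleration rate `a`
# stands in for the optimized quantity; the objective is a made-up stand-in.
def objective(a, v0=15.0, T=4.0, d_stop=40.0, v_floor=3.0):
    v_min = max(v0 - a * T, 0.0)                    # speed after braking for T seconds
    dist = v0 * T - 0.5 * a * T ** 2                # distance covered in that window
    comfort = a ** 2                                # prefer gentle deceleration
    slowness = max(v_floor - v_min, 0.0) ** 2       # prefer a larger minimum velocity
    overshoot = max(dist - d_stop, 0.0) ** 2        # do not cross the stop line
    return comfort + 2.0 * slowness + 0.1 * overshoot

def minimize_1d(f, x0, lr=0.01, steps=200, eps=1e-4):
    x = x0
    for _ in range(steps):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)   # numerical gradient
        x -= lr * grad
    return x

best_decel = minimize_1d(objective, x0=3.0)
print(round(best_decel, 3))
```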

13 pages, 2576 KiB  
Review
Impact of the Implementation of ChatGPT in Education: A Systematic Review
by Marta Montenegro-Rueda, José Fernández-Cerero, José María Fernández-Batanero and Eloy López-Meneses
Computers 2023, 12(8), 153; https://doi.org/10.3390/computers12080153 - 29 Jul 2023
Cited by 26 | Viewed by 73392
Abstract
The aim of this study is to present, based on a systematic review of the literature, an analysis of the impact of the application of the ChatGPT tool in education. The data were obtained by reviewing the results of studies published since the launch of this application (November 2022) in three leading scientific databases in the world of education (Web of Science, Scopus and Google Scholar). The sample consisted of 12 studies. Using a descriptive and quantitative methodology, the most significant data are presented. The results show that the implementation of ChatGPT in the educational environment has a positive impact on the teaching–learning process; however, the results also highlight the importance of teachers being trained to use the tool properly. Although ChatGPT can enhance the educational experience, its successful implementation requires teachers to be familiar with its operation. These findings provide a solid basis for future research and decision-making regarding the use of ChatGPT in the educational context. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)

13 pages, 2509 KiB  
Article
Automated Diagnosis of Prostate Cancer Using mpMRI Images: A Deep Learning Approach for Clinical Decision Support
by Anil B. Gavade, Rajendra Nerli, Neel Kanwal, Priyanka A. Gavade, Shridhar Sunilkumar Pol and Syed Tahir Hussain Rizvi
Computers 2023, 12(8), 152; https://doi.org/10.3390/computers12080152 - 28 Jul 2023
Cited by 3 | Viewed by 1425
Abstract
Prostate cancer (PCa) is a significant health concern for men worldwide, where early detection and effective diagnosis can be crucial for successful treatment. Multiparametric magnetic resonance imaging (mpMRI) has evolved into a significant imaging modality in this regard, which provides detailed images of the anatomy and tissue characteristics of the prostate gland. However, interpreting mpMRI images can be challenging for humans due to the wide range of appearances and features of PCa, which can be subtle and difficult to distinguish from normal prostate tissue. Deep learning (DL) approaches can be beneficial in this regard by automatically differentiating relevant features and providing an automated diagnosis of PCa. DL models can assist the existing clinical decision support system by saving a physician’s time in localizing regions of interest (ROIs) and help in providing better patient care. In this paper, contemporary DL models are used to create a pipeline for the segmentation and classification of mpMRI images. Our DL approach follows two steps: a U-Net architecture for segmenting ROI in the first stage and a long short-term memory (LSTM) network for classifying the ROI as either cancerous or non-cancerous. We trained our DL models on the I2CVB (Initiative for Collaborative Computer Vision Benchmarking) dataset and conducted a thorough comparison with our experimental setup. Our proposed DL approach, with simpler architectures and training strategy using a single dataset, outperforms existing techniques in the literature. Results demonstrate that the proposed approach can detect PCa disease with high precision and also has a high potential to improve clinical assessment. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)

41 pages, 1074 KiB  
Article
Convolutional Neural Networks: A Survey
by Moez Krichen
Computers 2023, 12(8), 151; https://doi.org/10.3390/computers12080151 - 28 Jul 2023
Cited by 71 | Viewed by 8752
Abstract
Artificial intelligence (AI) has become a cornerstone of modern technology, revolutionizing industries from healthcare to finance. Convolutional neural networks (CNNs) are a subset of AI that have emerged as a powerful tool for various tasks including image recognition, speech recognition, natural language processing (NLP), and even in the field of genomics, where they have been utilized to classify DNA sequences. This paper provides a comprehensive overview of CNNs and their applications in image recognition tasks. It first introduces the fundamentals of CNNs, including the layers of CNNs, convolution operation (Conv_Op), Feat_Maps, activation functions (Activ_Func), and training methods. It then discusses several popular CNN architectures such as LeNet, AlexNet, VGG, ResNet, and InceptionNet, and compares their performance. It also examines when to use CNNs, their advantages and limitations, and provides recommendations for developers and data scientists, including preprocessing the data, choosing appropriate hyperparameters (Hyper_Param), and evaluating model performance. It further explores the existing platforms and libraries for CNNs such as TensorFlow, Keras, PyTorch, Caffe, and MXNet, and compares their features and functionalities. Moreover, it estimates the cost of using CNNs and discusses potential cost-saving strategies. Finally, it reviews recent developments in CNNs, including attention mechanisms, capsule networks, transfer learning, adversarial training, quantization and compression, and enhancing the reliability and efficiency of CNNs through formal methods. The paper is concluded by summarizing the key takeaways and discussing the future directions of CNN research and development. Full article
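To ground the building blocks the survey discusses (convolution layers, activation functions, pooling, and a fully connected head), here is a compact PyTorch example; the 32×32 RGB input size and ten output classes are assumptions for the illustration, not an architecture from the paper.

```python
# Compact CNN illustrating the surveyed building blocks: convolution, ReLU
# activation, pooling, and a fully connected classifier head.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolution -> feature maps
            nn.ReLU(),                                     # activation function
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))   # a batch of four 32x32 RGB images
print(logits.shape)                          # torch.Size([4, 10])
```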

12 pages, 1044 KiB  
Article
The Generation of Articulatory Animations Based on Keypoint Detection and Motion Transfer Combined with Image Style Transfer
by Xufeng Ling, Yu Zhu, Wei Liu, Jingxin Liang and Jie Yang
Computers 2023, 12(8), 150; https://doi.org/10.3390/computers12080150 - 28 Jul 2023
Viewed by 1067
Abstract
Knowing the correct positioning of the tongue and mouth for pronunciation is crucial for learning English pronunciation correctly. Articulatory animation is an effective way to address the above task and helpful to English learners. However, articulatory animations are all traditionally hand-drawn. Different situations require varying animation styles, so a comprehensive redraw of all the articulatory animations is necessary. To address this issue, we developed a method for the automatic generation of articulatory animations using a deep learning system. Our method leverages an automatic keypoint-based detection network, a motion transfer network, and a style transfer network to generate a series of articulatory animations that adhere to the desired style. By inputting a target-style articulation image, our system is capable of producing animations with the desired characteristics. We created a dataset of articulation images and animations from public sources, including the International Phonetic Association (IPA), to establish our articulation image animation dataset. We performed preprocessing on the articulation images by segmenting them into distinct areas each corresponding to a specific articulatory part, such as the tongue, upper jaw, lower jaw, soft palate, and vocal cords. We trained a deep neural network model capable of automatically detecting the keypoints in typical articulation images. Also, we trained a generative adversarial network (GAN) model that can generate end-to-end animation of different styles automatically from the characteristics of keypoints and the learned image style. To train a relatively robust model, we used four different style videos: one magnetic resonance imaging (MRI) articulatory video and three hand-drawn videos. For further applications, we combined the consonant and vowel animations together to generate a syllable animation and the animation of a word consisting of many syllables. Experiments show that this system can auto-generate articulatory animations according to input phonetic symbols and should be helpful to people for English articulation correction. Full article
(This article belongs to the Topic Selected Papers from ICCAI 2023 and IMIP 2023)

19 pages, 17198 KiB  
Article
The Impact of the Web Data Access Object (WebDAO) Design Pattern on Productivity
by Zoltán Richárd Jánki and Vilmos Bilicki
Computers 2023, 12(8), 149; https://doi.org/10.3390/computers12080149 - 27 Jul 2023
Viewed by 1309
Abstract
In contemporary software development, it is crucial to adhere to design patterns because well-organized and readily maintainable source code facilitates bug fixes and the development of new features. A carefully selected set of design patterns can have a significant impact on the productivity of software development. Data Access Object (DAO) is a frequently used design pattern that provides an abstraction layer between the application and the database and is present in the back-end. As serverless development arises, more and more applications are using the DAO design pattern, but it has been moved to the front-end. We refer to this pattern as WebDAO. It is evident that the DAO pattern improves development productivity, but it has never been demonstrated for WebDAO. Here, we evaluated the open source Angular projects to determine whether they use WebDAO. For automatic evaluation, we trained a Natural Language Processing (NLP) model that can recognize the WebDAO design pattern with 92% accuracy. On the basis of the results, we analyzed the entire history of the projects and presented how the WebDAO design pattern impacts productivity, taking into account the number of commits, changes, and issues. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

27 pages, 2495 KiB  
Article
Toward Improved Machine Learning-Based Intrusion Detection for Internet of Things Traffic
by Sarah Alkadi, Saad Al-Ahmadi and Mohamed Maher Ben Ismail
Computers 2023, 12(8), 148; https://doi.org/10.3390/computers12080148 - 27 Jul 2023
Cited by 3 | Viewed by 1621
Abstract
The rapid development of Internet of Things (IoT) networks has revealed multiple security issues. On the other hand, machine learning (ML) has proven its efficiency in building intrusion detection systems (IDSs) intended to reinforce the security of IoT networks. In fact, the successful design and implementation of such techniques require the use of effective methods in terms of data and model quality. This paper presents an empirical impact analysis for the latter in the context of a multi-class classification scenario. A series of experiments were conducted using six ML models, along with four benchmarking datasets, including UNSW-NB15, BOT-IoT, ToN-IoT, and Edge-IIoT. The proposed framework investigates the marginal benefit of employing data pre-processing and model configurations considering IoT limitations. In fact, the empirical findings indicate that the accuracy of ML-based IDS detection rapidly increases when methods that use quality data and models are deployed. Specifically, data cleaning, transformation, normalization, and dimensionality reduction, along with model parameter tuning, exhibit significant potential to minimize computational complexity and yield better performance. In addition, MLP- and clustering-based algorithms outperformed the remaining models, and the obtained accuracy reached up to 99.97%. One should note that the performance of the challenger models was assessed using similar test sets and compared to the results reported in the relevant pieces of research. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
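The data-quality and model-quality steps listed above (normalization, dimensionality reduction, and parameter tuning) can be sketched as a scikit-learn pipeline; the synthetic data, the PCA dimensionality, and the parameter grid below are placeholders, not the paper's configuration for the benchmark datasets.

```python
# Sketch of normalization, dimensionality reduction, and hyperparameter tuning
# around an MLP classifier. Data and parameter values are placeholders.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=40, n_informative=12,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),        # normalization
    ("reduce", PCA(n_components=15)),   # dimensionality reduction
    ("clf", MLPClassifier(max_iter=500, random_state=0)),
])

search = GridSearchCV(pipeline, {"clf__hidden_layer_sizes": [(32,), (64, 32)],
                                 "clf__alpha": [1e-4, 1e-3]}, cv=3)
search.fit(X_train, y_train)
print(search.best_params_, round(search.score(X_test, y_test), 3))
```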

10 pages, 259 KiB  
Article
Feature Selection with Weighted Ensemble Ranking for Improved Classification Performance on the CSE-CIC-IDS2018 Dataset
by László Göcs and Zsolt Csaba Johanyák
Computers 2023, 12(8), 147; https://doi.org/10.3390/computers12080147 - 25 Jul 2023
Viewed by 1166
Abstract
Feature selection is a crucial step in machine learning, aiming to identify the most relevant features in high-dimensional data in order to reduce the computational complexity of model development and improve generalization performance. Ensemble feature-ranking methods combine the results of several feature-selection techniques to identify a subset of the most relevant features for a given task. In many cases, they produce a more comprehensive ranking of features than the individual methods used alone. This paper presents a novel approach to ensemble feature ranking, which uses a weighted average of the individual ranking scores calculated using these individual methods. The optimal weights are determined using a Taguchi-type design of experiments. The proposed methodology significantly improves classification performance on the CSE-CIC-IDS2018 dataset, particularly for attack types where traditional average-based feature-ranking score combinations result in low classification metrics. Full article
(This article belongs to the Special Issue Advances in Database Engineered Applications 2023)
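A minimal sketch of the weighted ensemble ranking idea: normalize the scores of several individual feature-selection methods and combine them with a weighted average before ranking. The two rankers and the weights below are placeholders; the paper derives its weights with a Taguchi-type design of experiments.

```python
# Sketch of weighted ensemble feature ranking: normalize per-method scores,
# combine them with placeholder weights, and rank the features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)

def min_max(scores):
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

score_sets = [
    min_max(f_classif(X, y)[0]),                          # ANOVA F-scores
    min_max(mutual_info_classif(X, y, random_state=0)),   # mutual information
]
weights = np.array([0.6, 0.4])                            # placeholder weights

ensemble_score = sum(w * s for w, s in zip(weights, score_sets))
ranking = np.argsort(ensemble_score)[::-1]                # best features first
print(ranking[:4])
```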
