Big Data Analytics Using Artificial Intelligence

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 75920

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia
Interests: machine learning; pattern recognition; human–machine interaction; behavior analytics; cognitive modelling
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Faculty of Information Technology, Al Al-Bayt University, Mafraq, Jordan
Interests: arithmetic optimization algorithm (AOA); bio-inspired computing; nature-inspired computing; swarm intelligence; artificial intelligence; meta-heuristic modeling; optimization algorithms; evolutionary computations; information retrieval; text clustering; feature selection; combinatorial problems; optimization; advanced machine learning; big data; natural language processing
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

Big data analytics is a high-priority focus of data science, and there is no doubt that big data are now growing quickly across all science and engineering fields. Big data analytics is the process of examining and analyzing massive and varied data to help organizations make more informed business decisions, especially by uncovering hidden patterns, unknown correlations, market trends, customer preferences, and other useful information. Big data have become essential, as numerous organizations deal with massive amounts of domain-specific information bearing on problems such as national intelligence, cybersecurity, biology, fraud detection, marketing, astronomy, and medical informatics. Several promising artificial intelligence techniques can be used for big data analytics, including representation learning, optimization methods, heuristics, machine learning, deep learning, artificial neural networks, Markov decision processes, support vector machines, natural language processing, machine vision, data mining, distributed and parallel learning, transfer learning, active learning, and kernel-based learning. In addition, big data analytics demands new and sophisticated algorithms based on artificial intelligence techniques to treat data in real time with high accuracy and productivity, such as association rule learning, classification tree analysis, genetic algorithms, machine learning, regression analysis, forecasting analysis, sentiment analysis, and social network analysis. Research using common big data tools is also of interest, including Xplenty, Adverity, Apache Hadoop, CDH (Cloudera Distribution for Hadoop), Cassandra, KNIME, Datawrapper, MongoDB, Lumify, HPCC, Storm, Apache SAMOA, Talend, RapidMiner, Qubole, Tableau, and R.
The goal of this Special Issue is to discuss several critical issues related to learning from massive amounts of data, to highlight current research endeavors and the challenges facing big data, and to share recent advances in this research area. We solicit new contributions with a strong emphasis on Artificial Intelligence for Big Data Analytics.

Prof. Dr. Amir H. Gandomi
Prof. Dr. Fang Chen
Dr. Laith Abualigah
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Big data analytics
  • Data science
  • Artificial intelligence
  • Machine learning
  • Intelligent decisions
  • Knowledge discovery
  • Deep learning
  • Clustering
  • Evolutionary computation
  • Association rule learning
  • Classification tree analysis
  • Genetic algorithms
  • Regression analysis
  • Forecasting analysis
  • Sentiment analysis
  • Social network analysis
  • Statistical description
  • Apache Hadoop
  • Benchmarks for big data analysis
  • Analysis of real-time data
  • Real-world applications of Artificial Intelligence in Big data

Published Papers (15 papers)


Editorial


5 pages, 198 KiB  
Editorial
Big Data Analytics Using Artificial Intelligence
by Amir H. Gandomi, Fang Chen and Laith Abualigah
Electronics 2023, 12(4), 957; https://doi.org/10.3390/electronics12040957 - 15 Feb 2023
Cited by 3 | Viewed by 5467
Abstract
Data analytics using artificial intelligence is the process of leveraging advanced AI techniques to extract insights and knowledge from large and complex datasets [...] Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)

Research


13 pages, 463 KiB  
Article
A Robust Chronic Kidney Disease Classifier Using Machine Learning
by Debabrata Swain, Utsav Mehta, Ayush Bhatt, Hardeep Patel, Kevin Patel, Devanshu Mehta, Biswaranjan Acharya, Vassilis C. Gerogiannis, Andreas Kanavos and Stella Manika
Electronics 2023, 12(1), 212; https://doi.org/10.3390/electronics12010212 - 1 Jan 2023
Cited by 28 | Viewed by 3167
Abstract
Clinical support systems are affected by the issue of high variance in terms of chronic disorder prognosis. This uncertainty is one of the principal causes for the demise of large populations around the world suffering from some fatal diseases such as chronic kidney disease (CKD). Due to this reason, the diagnosis of this disease is of great concern for healthcare systems. In such a case, machine learning can be used as an effective tool to reduce the randomness in clinical decision making. Conventional methods for the detection of chronic kidney disease are not always accurate because of their high degree of dependency on several sets of biological attributes. Machine learning is the process of training a machine using a vast collection of historical data for the purpose of intelligent classification. This work aims at developing a machine-learning model that can use publicly available data to forecast the occurrence of chronic kidney disease. A set of data preprocessing steps was performed on this dataset in order to construct a generic model. These steps include the appropriate imputation of missing data points, along with the balancing of data using the SMOTE algorithm and the scaling of the features. A statistical technique, namely the chi-squared test, is used to extract a minimal set of adequate features that are highly correlated with the output. For the model training, a stack of supervised-learning techniques is used to develop a robust machine-learning model. Out of all the applied learning techniques, support vector machine (SVM) and random forest (RF) achieved the lowest false-negative rates, with test accuracies of 99.33% and 98.67%, respectively. However, SVM achieved better results than RF when validated with 10-fold cross-validation. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)
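The preprocessing-plus-classifier pipeline this abstract describes can be sketched with scikit-learn. This is an illustrative reconstruction on synthetic data, not the paper's code or dataset, and the SMOTE balancing step is omitted since it lives in the separate imbalanced-learn package:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Synthetic stand-in for a CKD-style tabular dataset (not the paper's data).
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan   # simulate missing clinical values

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # fill missing data points
    ("scale", MinMaxScaler()),                   # chi2 requires non-negative input
    ("select", SelectKBest(chi2, k=8)),          # keep features most tied to y
    ("clf", SVC(kernel="rbf")),                  # the SVM classifier
])
# The paper's SMOTE balancing would slot in via imbalanced-learn's Pipeline;
# it is left out of this sketch.
scores = cross_val_score(pipe, X, y, cv=10)      # 10-fold cross-validation
print(round(scores.mean(), 3))
```

The MinMaxScaler is placed before the chi-squared selector because `chi2` rejects negative feature values; the ordering of these stages is our assumption, not a detail stated in the abstract.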

22 pages, 11613 KiB  
Article
Design Research Insights on Text Mining Analysis: Establishing the Most Used and Trends in Keywords of Design Research Journals
by Muneer Nusir, Ali Louati, Hassen Louati, Usman Tariq, Raed Abu Zitar, Laith Abualigah and Amir H. Gandomi
Electronics 2022, 11(23), 3930; https://doi.org/10.3390/electronics11233930 - 28 Nov 2022
Cited by 2 | Viewed by 2350
Abstract
Design research topics attract exponentially more attention and consideration among researchers. This study is the first research article that endeavors to analyze selected design research publications using an advanced approach called “text mining”. This approach derives its results from the occurrence of research terms (i.e., keywords), which can be more robust than other methods/approaches that rely on contextual data or authors’ perspectives. The main aim of this research paper is to expand knowledge and familiarity with design research and explore future research directions by addressing the gaps in the literature; the literature review shows that research in the design domain has still not built up a theory that can unify the field. In general, text mining with these features allows increased validity and generalization as compared to other approaches in the literature. We used a text mining technique to collect data and analyzed 3553 articles collected from 10 journals using 17,487 keywords. New topics that attract researchers, practitioners, and journal editorial boards were investigated in the domain of design concepts. Such issues as co-innovation, ethical design, social practice design, conceptual thinking, collaborative design, creativity, and generative methods and tools were subject to additional research. On the other hand, researchers pursued topics such as collaborative design, human-centered design, interdisciplinary design, design education, participatory design, design practice, design development, collaboration, design theories, design administration, and service/product design areas. The key categories investigated and reported in this paper helped in determining which fields are flourishing and which fields are eroding. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)
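The keyword-frequency core of such a text-mining pass reduces to counting term occurrences across articles. A minimal standard-library sketch, using invented placeholder records rather than the paper's 3553-article corpus:

```python
from collections import Counter

# Hypothetical (journal, keywords) records standing in for the analyzed
# corpus; journal names and keywords below are placeholders.
articles = [
    ("Design Studies", ["collaborative design", "creativity"]),
    ("Design Issues", ["human-centered design", "collaborative design"]),
    ("CoDesign", ["participatory design", "collaborative design"]),
]

# Count how often each keyword appears across all articles.
counts = Counter(kw.lower() for _, kws in articles for kw in kws)
print(counts.most_common(1))  # → [('collaborative design', 3)]
```

Scaling this idea to thousands of records is just a larger input list; the analytical weight in the paper lies in assembling and normalizing the keyword data, not in the counting itself.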

24 pages, 37206 KiB  
Article
A Novel Deep Learning Technique for Detecting Emotional Impact in Online Education
by Shadi AlZu’bi, Raed Abu Zitar, Bilal Hawashin, Samia Abu Shanab, Amjed Zraiqat, Ala Mughaid, Khaled H. Almotairi and Laith Abualigah
Electronics 2022, 11(18), 2964; https://doi.org/10.3390/electronics11182964 - 19 Sep 2022
Cited by 17 | Viewed by 3653
Abstract
Emotional intelligence is the automatic detection of human emotions using various intelligent methods. Several studies have been conducted on emotional intelligence, but only a few have been adopted in education. Detecting student emotions can significantly increase productivity and improve the education process. This paper proposes a new deep learning method to detect student emotions. The main aim of this paper is to map the relationship between teaching practices and student learning based on emotional impact. Facial recognition algorithms extract helpful information from online platforms as image classification techniques are applied to detect the emotions of student and/or teacher faces. As part of this work, two deep learning models are compared according to their performance. For validation of the proposed system, an online course with students is used; the findings suggest that this technique operates well. Based on emotional analysis, several deep learning techniques are applied to train and test the emotion classification process. Transfer learning with a pre-trained deep neural network is also used to increase the accuracy of the emotion classification stage. The obtained results show that the performance of the proposed method is promising using both techniques, as presented in the Experimental Results Section. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)

25 pages, 649 KiB  
Article
A Survey of Trajectory Planning Techniques for Autonomous Systems
by Imran Mir, Faiza Gul, Suleman Mir, Mansoor Ahmed Khan, Nasir Saeed, Laith Abualigah, Belal Abuhaija and Amir H. Gandomi
Electronics 2022, 11(18), 2801; https://doi.org/10.3390/electronics11182801 - 6 Sep 2022
Cited by 24 | Viewed by 6108
Abstract
This work offers an overview of the effective communication techniques for space exploration of ground, aerial, and underwater vehicles. We not only comprehensively summarize the trajectory planning, space exploration, optimization, and other challenges encountered but also present the possible directions for future work. Because a detailed study like this is uncommon in the literature, an attempt has been made to fill the gap for readers interested in path planning. This paper also includes optimization strategies that can be used to implement terrestrial, underwater, and airborne applications. This study addresses numerical, bio-inspired, and hybrid methodologies for each dimension described. Throughout this study, we endeavored to establish a centralized platform in which a wealth of research on autonomous vehicles (on the land and their trajectory optimizations), airborne vehicles, and underwater vehicles, is published. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)

21 pages, 3397 KiB  
Article
Real-Time Facemask Detection for Preventing COVID-19 Spread Using Transfer Learning Based Deep Neural Network
by Mona A. S. Ai, Anitha Shanmugam, Suresh Muthusamy, Chandrasekaran Viswanathan, Hitesh Panchal, Mahendran Krishnamoorthy, Diaa Salama Abd Elminaam and Rasha Orban
Electronics 2022, 11(14), 2250; https://doi.org/10.3390/electronics11142250 - 18 Jul 2022
Cited by 12 | Viewed by 3175
Abstract
The COVID-19 pandemic disrupted people’s livelihoods and hindered global trade and transportation. During the pandemic, the World Health Organization mandated that masks be worn to protect against this deadly virus, and protecting one’s face with a mask has become the standard. Many public service providers will encourage clients to wear masks properly in the foreseeable future. On the other hand, manually monitoring individuals in public places is exhausting. This paper offers a solution based on deep learning for identifying masks worn over faces in public places to minimize coronavirus community transmission. The main contribution of the proposed work is the development of a real-time system for determining whether the person on a webcam is wearing a mask or not. The ensemble method makes it easier to achieve high accuracy and makes considerable strides toward enhancing detection speed. In addition, the implementation of transfer learning on pretrained models and stringent testing on an objective dataset led to the development of a highly dependable and inexpensive solution. The findings provide validity to the application’s potential for use in real-world settings, contributing to the reduction in pandemic transmission. Compared to existing methodologies, the proposed method delivers improved accuracy, specificity, precision, recall, and F-measure performance in three-class outputs. An appropriate balance is kept between the number of necessary parameters and the time needed to run the various models. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)

22 pages, 2170 KiB  
Article
Scientometric Analysis and Classification of Research Using Convolutional Neural Networks: A Case Study in Data Science and Analytics
by Mohammad Daradkeh, Laith Abualigah, Shadi Atalla and Wathiq Mansoor
Electronics 2022, 11(13), 2066; https://doi.org/10.3390/electronics11132066 - 30 Jun 2022
Cited by 28 | Viewed by 4153
Abstract
With the increasing development of published literature, classification methods based on bibliometric information and traditional machine learning approaches encounter performance challenges related to overly coarse classifications and low accuracy. This study presents a deep learning approach for scientometric analysis and classification of scientific literature based on convolutional neural networks (CNN). Three dimensions, namely publication features, author features, and content features, were divided into explicit and implicit features to form a set of scientometric terms through explicit feature extraction and implicit feature mapping. The weighted scientometric term vectors are fitted into a CNN model to achieve dual-label classification of literature based on research content and methods. The effectiveness of the proposed model is demonstrated using an application example from the data science and analytics literature. The empirical results show that the scientometric classification model proposed in this study performs better than comparable machine learning classification methods in terms of precision, recognition, and F1-score. It also exhibits higher accuracy than deep learning classification based solely on explicit and dominant features. This study provides a methodological guide for fine-grained classification of scientific literature and a thorough investigation of its practice. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)

20 pages, 1857 KiB  
Article
A Novel Method for the Classification of Butterfly Species Using Pre-Trained CNN Models
by Fathimathul Rajeena P. P., Rasha Orban, Kogilavani Shanmuga Vadivel, Malliga Subramanian, Suresh Muthusamy, Diaa Salama Abd Elminaam, Ayman Nabil, Laith Abualigah, Mohsen Ahmadi and Mona A. S. Ali
Electronics 2022, 11(13), 2016; https://doi.org/10.3390/electronics11132016 - 27 Jun 2022
Cited by 16 | Viewed by 5077
Abstract
In comparison to the competitors, engineers must provide quick, low-cost, and dependable solutions. The advancement of machine-generated intelligence and its application in almost every field has created a need to reduce the human role in image processing while also saving time and labor. Lepidopterology is the discipline of entomology dedicated to the scientific analysis of caterpillars and the three butterfly superfamilies. Students studying lepidopterology must generally capture butterflies with nets and dissect them to discover the insect’s family type and shape. This research work aims to assist science students in correctly recognizing butterflies without harming the insects during their analysis. This paper discusses transfer-learning-based neural network models to identify butterfly species. The dataset was collected from the Kaggle website and contains 10,035 images of 75 different species of butterflies. From the available dataset, 15 unusual species were selected, covering various butterfly orientations, photography angles, butterfly lengths, occlusion, and backdrop complexity. When we analyzed the dataset, we found an imbalanced class distribution among the 15 identified classes, leading to overfitting. The proposed system performs data augmentation to prevent data scarcity and reduce overfitting. The augmented dataset is also used to improve the accuracy of the data models. This research work utilizes transfer learning based on various convolutional neural network architectures such as VGG16, VGG19, MobileNet, Xception, ResNet50, and InceptionV3 to classify the butterfly species into various categories. All the proposed models are evaluated using precision, recall, F-measure, and accuracy. The investigation findings reveal that the InceptionV3 architecture provides an accuracy of 94.66%, superior to all other architectures. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)
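The class-balancing augmentation the abstract mentions can be illustrated with plain NumPy. This toy sketch uses only flips and 90° rotations on fake image data; real pipelines (e.g., Keras' `ImageDataGenerator`) add shifts, zooms, and color jitter as well:

```python
import numpy as np

def augment(images):
    """Expand a batch with horizontal flips and 90-degree rotations.

    A minimal stand-in for the augmentation used to balance rare classes;
    production pipelines apply many more transforms.
    """
    flipped = images[:, :, ::-1, :]               # mirror along the width axis
    rotated = np.rot90(images, k=1, axes=(1, 2))  # rotate the H/W plane by 90°
    return np.concatenate([images, flipped, rotated], axis=0)

# Fake batch: 4 RGB "butterfly" images of 32x32 pixels (placeholder data).
batch = np.random.default_rng(0).random((4, 32, 32, 3))
augmented = augment(batch)
print(augmented.shape)  # (12, 32, 32, 3): three views per original image
```

Applying such an expansion only to underrepresented classes evens out the class distribution without discarding majority-class images.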

18 pages, 3751 KiB  
Article
Classification of Glaucoma Based on Elephant-Herding Optimization Algorithm and Deep Belief Network
by Mona A. S. Ali, Kishore Balasubramanian, Gayathri Devi Krishnamoorthy, Suresh Muthusamy, Santhiya Pandiyan, Hitesh Panchal, Suman Mann, Kokilavani Thangaraj, Noha E. El-Attar, Laith Abualigah and Diaa Salama Abd Elminaam
Electronics 2022, 11(11), 1763; https://doi.org/10.3390/electronics11111763 - 2 Jun 2022
Cited by 15 | Viewed by 2926
Abstract
This study proposes a novel glaucoma identification system from fundus images through a deep belief network (DBN) optimized by the elephant-herding optimization (EHO) algorithm. Initially, the input image undergoes the preprocessing steps of noise removal and enhancement, followed by optic disc (OD) and optic cup (OC) segmentation and the extraction of structural, intensity, and textural features. The most discriminative features are then selected using the ReliefF algorithm and passed to the DBN for classification into glaucomatous or normal. To enhance the classification rate of the DBN, its parameters are fine-tuned by the EHO algorithm. The model was evaluated on public and private datasets with 7280 images, attaining a maximum classification rate of 99.4%, 100% specificity, and 99.89% sensitivity. The 10-fold cross-validation reduced misclassification and attained 98.5% accuracy. Investigations proved the efficacy of the proposed method in avoiding bias and dataset variability and in reducing false positives compared to similar works on glaucoma classification. The proposed system can be tested on diverse datasets, aiding improved glaucoma diagnosis. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)
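The ReliefF selector named above scores features by how well they separate each sample from its nearest neighbour of the other class while keeping it close to its nearest same-class neighbour. A simplified binary, single-neighbour Relief sketch on toy data (ReliefF proper averages over k neighbours and supports multiple classes; this is an illustration, not the paper's implementation):

```python
import numpy as np

def relief_weights(X, y, n_iter=200, seed=0):
    """Simplified Relief: reward features that separate nearest hit from miss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        diff = np.abs(X - X[i]).sum(axis=1)      # L1 distance to sample i
        diff[i] = np.inf                         # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], diff, np.inf))   # nearest same class
        miss = np.argmin(np.where(y != y[i], diff, np.inf))  # nearest other class
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

# Toy data, not fundus features: feature 0 tracks the label, the rest are noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)
X = rng.random((200, 5))
X[:, 0] = y + 0.05 * rng.standard_normal(200)
w = relief_weights(X, y)
print(w.argmax())  # feature 0 should receive the largest weight
```

Features with the largest weights are kept; the abstract then feeds those selected features to the DBN classifier.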

16 pages, 732 KiB  
Article
Enhanced Credit Card Fraud Detection Model Using Machine Learning
by Noor Saleh Alfaiz and Suliman Mohamed Fati
Electronics 2022, 11(4), 662; https://doi.org/10.3390/electronics11040662 - 21 Feb 2022
Cited by 55 | Viewed by 11887
Abstract
The COVID-19 pandemic has limited people’s mobility to a certain extent, making it difficult to purchase goods and services offline, which has led to the creation of a culture of increased dependence on online services. One of the crucial issues with using credit cards is fraud, which is a serious challenge in the realm of online transactions. Consequently, there is a huge need to develop the best possible approach to using machine learning in order to prevent almost all fraudulent credit card transactions. This paper studies a total of 66 machine learning models based on two stages of evaluation. A real-world credit card fraud detection dataset of European cardholders is used for each model along with stratified K-fold cross-validation. In the first stage, nine machine learning algorithms are tested to detect fraudulent transactions. The best three algorithms are nominated to be used again in the second stage, with 19 resampling techniques used with each of the best three algorithms. Out of 330 evaluation metric values that took nearly one month to obtain, the All K-Nearest Neighbors (AllKNN) undersampling technique along with CatBoost (AllKNN-CatBoost) is considered to be the best proposed model. Accordingly, the AllKNN-CatBoost model is compared with related works. The results indicate that the proposed model outperforms previous models with an AUC value of 97.94%, a Recall value of 95.91%, and an F1-Score value of 87.40%. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)
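The AllKNN undersampler (as implemented in imbalanced-learn) repeatedly applies the edited-nearest-neighbours (ENN) rule with a growing neighbourhood size: majority-class samples whose neighbours mostly disagree with their label are dropped. A single ENN pass in NumPy on toy data shows the core rule; this is a sketch of the technique, not the paper's pipeline:

```python
import numpy as np

def enn_undersample(X, y, k=3, majority=0):
    """One edited-nearest-neighbours pass: drop majority-class samples whose
    k nearest neighbours mostly belong to another class. AllKNN repeats this
    rule for growing k; this single pass illustrates the idea.
    """
    keep = np.ones(len(y), dtype=bool)
    for i in np.where(y == majority)[0]:
        d = np.abs(X - X[i]).sum(axis=1)     # L1 distances to sample i
        d[i] = np.inf                        # never count the sample itself
        neighbours = y[np.argsort(d)[:k]]
        if (neighbours != majority).sum() > k // 2:  # neighbourhood disagrees
            keep[i] = False
    return X[keep], y[keep]

# Toy imbalanced data: 30 "legitimate" (class 0) vs 6 "fraud" (class 1) points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1, (30, 2)), rng.normal(1.0, 1, (6, 2))])
y = np.array([0] * 30 + [1] * 6)
Xr, yr = enn_undersample(X, y)
print((yr == 0).sum(), (yr == 1).sum())  # minority class is kept intact
```

After such undersampling, the cleaned, better-balanced set is handed to the classifier (CatBoost in the paper's best configuration).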

20 pages, 14650 KiB  
Article
A Hybrid Data Analytics Framework with Sentiment Convergence and Multi-Feature Fusion for Stock Trend Prediction
by Mohammad Kamel Daradkeh
Electronics 2022, 11(2), 250; https://doi.org/10.3390/electronics11020250 - 13 Jan 2022
Cited by 16 | Viewed by 3075
Abstract
Stock market analysis plays an indispensable role in gaining knowledge about the stock market, developing trading strategies, and determining the intrinsic value of stocks. Nevertheless, predicting stock trends remains extremely difficult due to a variety of influencing factors, volatile market news, and sentiments. In this study, we present a hybrid data analytics framework that integrates convolutional neural networks and bidirectional long short-term memory (CNN-BiLSTM) to evaluate the impact of convergence of news events and sentiment trends with quantitative financial data on predicting stock trends. We evaluated the proposed framework using two case studies from the real estate and communications sectors based on data collected from the Dubai Financial Market (DFM) between 1 January 2020 and 1 December 2021. The results show that combining news events and sentiment trends with quantitative financial data improves the accuracy of predicting stock trends. Compared to benchmarked machine learning models, CNN-BiLSTM offers an improvement of 11.6% in real estate and 25.6% in communications when news events and sentiment trends are combined. This study provides several theoretical and practical implications for further research on contextual factors that influence the prediction and analysis of stock trends. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)

19 pages, 2140 KiB  
Article
A Hybrid Imputation Method for Multi-Pattern Missing Data: A Case Study on Type II Diabetes Diagnosis
by Mohammad H. Nadimi-Shahraki, Saeed Mohammadi, Hoda Zamani, Mostafa Gandomi and Amir H. Gandomi
Electronics 2021, 10(24), 3167; https://doi.org/10.3390/electronics10243167 - 19 Dec 2021
Cited by 11 | Viewed by 4083
Abstract
Real medical datasets usually contain missing data with different patterns, which decreases the performance of classifiers used in intelligent healthcare and disease diagnosis systems. Many methods have been proposed to impute missing data; however, they do not fulfill the need for data quality, especially in real datasets with different missing data patterns. In this paper, a four-layer model is introduced, and then a hybrid imputation (HIMP) method using this model is proposed to impute multi-pattern missing data, including non-random, random, and completely random patterns. In HIMP, first, non-random missing data patterns are imputed, and then the obtained dataset is decomposed into two datasets containing random and completely random missing data patterns. Then, concerning the missing data patterns in each dataset, different single or multiple imputation methods are used. Finally, the best-imputed datasets gained from random and completely random patterns are merged to form the final dataset. The experimental evaluation was conducted on a real dataset named IRDia, including all three missing data patterns. The proposed method and comparative methods were compared using different classifiers in terms of accuracy, precision, recall, and F1-score. The classifiers’ performances show that HIMP can impute multi-pattern missing values more effectively than other comparative methods. Full article
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)
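The HIMP idea of routing each missing-data pattern to a different imputer can be caricatured in a few lines of NumPy. The thresholded per-column rule below is an invented simplification for illustration, not the authors' four-layer model:

```python
import numpy as np

def pattern_wise_impute(X, threshold=0.5):
    """Impute each column by a strategy keyed to its missingness pattern:
    heavily missing columns get a sentinel fill (flagging 'not measured',
    i.e., likely non-random missingness); lightly missing columns get the
    column median (treated as random missingness). A toy stand-in for
    HIMP's routing of each pattern to a different imputer.
    """
    X = X.copy()
    for j in range(X.shape[1]):
        col = X[:, j]              # view into the copy; writes go through
        nan = np.isnan(col)
        if not nan.any():
            continue
        if nan.mean() > threshold:          # heavy, likely non-random gaps
            col[nan] = -1.0                 # sentinel: value never measured
        else:                               # sparse, treated as random gaps
            col[nan] = np.median(col[~nan])
    return X

data = np.array([[1.0, np.nan, 5.0],
                 [2.0, np.nan, np.nan],
                 [3.0, 7.0,    6.0]])
filled = pattern_wise_impute(data)
print(filled)
```

The real method goes further: it decomposes the dataset by pattern, applies single or multiple imputers per partition, and merges the best-imputed results.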

23 pages, 1946 KiB  
Article
EWOA-OPF: Effective Whale Optimization Algorithm to Solve Optimal Power Flow Problem
by Mohammad H. Nadimi-Shahraki, Shokooh Taghian, Seyedali Mirjalili, Laith Abualigah, Mohamed Abd Elaziz and Diego Oliva
Electronics 2021, 10(23), 2975; https://doi.org/10.3390/electronics10232975 - 29 Nov 2021
Cited by 60 | Viewed by 3619
Abstract
The optimal power flow (OPF) is a vital tool for optimizing the control parameters of a power system with respect to desired objective functions subject to system constraints. Metaheuristic algorithms have proven well-suited to such complex optimization problems. The whale optimization algorithm (WOA) is a well-regarded metaheuristic widely used to solve different optimization problems; however, despite its use in fields of application such as OPF, its effectiveness decreases as the dimension of the test system increases. Therefore, this paper proposes an effective whale optimization algorithm for solving optimal power flow problems (EWOA-OPF). The main goal of this enhancement is to improve exploration and maintain a proper balance between the exploration and exploitation of the canonical WOA. In the proposed algorithm, the movement strategy of whales is enhanced by introducing two new strategies: (1) encircling the prey using Levy motion and (2) searching for prey using Brownian motion, which cooperate with the canonical bubble-net attack. To validate the proposed EWOA-OPF, a comparison with six well-known optimization algorithms on the OPF problem is established. All algorithms optimize single- and multi-objective functions of the OPF under system constraints. Standard IEEE 6-bus, IEEE 14-bus, IEEE 30-bus, and IEEE 118-bus test systems are used to evaluate EWOA-OPF and the comparative algorithms on power systems of diverse sizes. The comparison of results shows that EWOA-OPF solves single- and multi-objective OPF problems with better solutions than the comparative algorithms.
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)
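The two movement strategies named in the abstract are standard random-motion models, and a minimal sketch of them is shown below. This is not the authors' code: the Levy step uses Mantegna's algorithm (a common way to generate Levy-distributed steps), the Brownian step is a Gaussian perturbation, and the `move_whale` helper with its per-dimension update is an illustrative assumption.

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's algorithm: draw one Levy-distributed step length,
    modeling the long, heavy-tailed jumps used when encircling the prey."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def brownian_step(scale=1.0):
    """A Brownian-motion step is simply a small Gaussian perturbation."""
    return random.gauss(0, scale)

def move_whale(position, prey, mode):
    """Move one whale toward the prey (best solution so far), scaling the
    per-dimension distance by either a Levy jump (exploration-heavy
    encircling) or a Brownian perturbation (local search)."""
    if mode == "levy":
        return [x + levy_step() * (p - x) for x, p in zip(position, prey)]
    return [x + brownian_step() * (p - x) for x, p in zip(position, prey)]
```

Because each dimension is scaled by the distance to the prey, a whale already matching the prey in some dimension stays put there, while Levy's heavy tail occasionally produces long exploratory jumps in the others.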

Review


24 pages, 3254 KiB  
Review
Evolution of Machine Learning in Tuberculosis Diagnosis: A Review of Deep Learning-Based Medical Applications
by Manisha Singh, Gurubasavaraj Veeranna Pujar, Sethu Arun Kumar, Meduri Bhagyalalitha, Handattu Shankaranarayana Akshatha, Belal Abuhaija, Anas Ratib Alsoud, Laith Abualigah, Narasimha M. Beeraka and Amir H. Gandomi
Electronics 2022, 11(17), 2634; https://doi.org/10.3390/electronics11172634 - 23 Aug 2022
Cited by 27 | Viewed by 8364
Abstract
Tuberculosis (TB) is an infectious disease that remains a major menace to human health globally, causing millions of deaths yearly. Timely diagnosis and treatment are key to the patient's full recovery. Computer-aided diagnosis (CAD) has become a promising option for TB diagnosis: many CAD approaches using machine learning have been applied to TB diagnosis, contributing to the resurgence of artificial intelligence (AI) in the medical field. Deep learning (DL), a major branch of AI, offers even greater scope for diagnosing this deadly disease. This review focuses on the limitations of conventional TB diagnostics and gives a broad description of various machine learning algorithms and their applications in TB diagnosis. Furthermore, various deep learning methods integrated with other systems, such as neuro-fuzzy logic, genetic algorithms, and artificial immune systems, are discussed. Finally, multiple state-of-the-art tools such as CAD4TB, Lunit INSIGHT, qXR, and InferRead DR Chest are summarized to give a view of the future of AI-assisted TB diagnosis.
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)

50 pages, 2627 KiB  
Review
Recent Advances in Harris Hawks Optimization: A Comparative Study and Applications
by Abdelazim G. Hussien, Laith Abualigah, Raed Abu Zitar, Fatma A. Hashim, Mohamed Amin, Abeer Saber, Khaled H. Almotairi and Amir H. Gandomi
Electronics 2022, 11(12), 1919; https://doi.org/10.3390/electronics11121919 - 20 Jun 2022
Cited by 42 | Viewed by 4716
Abstract
The Harris hawks optimizer (HHO) is a recent population-based metaheuristic that simulates the hunting behavior of hawks. This swarm-based optimizer carries out its search through a novel combination of exploration, exploitation, and multiphase search. This review focuses on the applications and developments of HHO, a well-established, robust optimizer and one of the most popular swarm-based techniques of 2020. Moreover, several experiments were carried out to demonstrate the power and effectiveness of HHO compared with nine other state-of-the-art algorithms on the CEC2005 and CEC2017 benchmark suites of the Congress on Evolutionary Computation. The review also offers deep insight into possible future directions and ideas worth investigating regarding new variants of the HHO algorithm and its widespread applications.
(This article belongs to the Special Issue Big Data Analytics Using Artificial Intelligence)
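The switch between the exploration and exploitation phases mentioned in the abstract is governed, in the published HHO algorithm, by the prey's escaping energy, E = 2·E0·(1 − t/T) with E0 drawn uniformly from [−1, 1]. A minimal sketch of that mechanism follows; the function names are illustrative, and the full algorithm (not shown) further subdivides exploitation into soft/hard besiege variants.

```python
import random

def escaping_energy(t, T):
    """HHO escaping-energy parameter: E = 2*E0*(1 - t/T), with E0 drawn
    uniformly from [-1, 1]. Its magnitude shrinks linearly toward zero
    as iteration t approaches the budget T."""
    e0 = random.uniform(-1.0, 1.0)
    return 2.0 * e0 * (1.0 - t / T)

def phase(E):
    """|E| >= 1 triggers global exploration; |E| < 1 switches to local
    exploitation around the prey (current best solution)."""
    return "exploration" if abs(E) >= 1 else "exploitation"
```

Early on, |E| can reach 2 and exploration dominates; by the final iteration the energy has decayed to zero, so the hawks only exploit.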
