AI, Volume 4, Issue 4 (December 2023) – 15 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
20 pages, 896 KiB  
Article
Adapting the Parameters of RBF Networks Using Grammatical Evolution
by Ioannis G. Tsoulos, Alexandros Tzallas and Evangelos Karvounis
AI 2023, 4(4), 1059-1078; https://doi.org/10.3390/ai4040054 - 11 Dec 2023
Viewed by 1187
Abstract
Radial basis function networks are widely used in a multitude of applications in various scientific areas in both classification and data fitting problems. These networks deal with the above problems by adjusting their parameters through various optimization techniques. However, an important issue to address is the need to locate a satisfactory interval for the parameters of a network before adjusting these parameters. This paper proposes a two-stage method. In the first stage, via the incorporation of grammatical evolution, rules are generated to create the optimal value interval of the network parameters. During the second stage of the technique, the mentioned parameters are fine-tuned with a genetic algorithm. The current work was tested on a number of datasets from the recent literature and found to reduce the classification or data fitting error by over 40% on most datasets. In addition, the proposed method appears in the experiments to be robust, as the fluctuation of the number of network parameters does not significantly affect its performance.
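The two-stage idea described in the abstract can be illustrated in a few lines. The sketch below is a rough approximation, not the authors' implementation: a simple candidate-interval search stands in for grammatical evolution (stage 1), and a basic genetic algorithm fine-tunes the RBF parameters inside the chosen interval (stage 2). All function names and hyperparameters are illustrative.

```python
# Minimal two-stage sketch: pick a parameter interval, then GA-tune an RBF net.
import numpy as np

def rbf_predict(X, centers, widths, weights):
    # Gaussian RBF layer followed by a linear output layer.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    phi = np.exp(-(d / widths) ** 2)
    return phi @ weights

def fitness(params, X, y, k):
    # Unpack a flat parameter vector into centers, widths, and output weights.
    n_feat = X.shape[1]
    centers = params[: k * n_feat].reshape(k, n_feat)
    widths = np.abs(params[k * n_feat : k * n_feat + k]) + 1e-6
    weights = params[k * n_feat + k :]
    return np.mean((rbf_predict(X, centers, widths, weights) - y) ** 2)

def ga_tune(X, y, k, low, high, pop=40, gens=60, seed=0):
    # Basic GA: truncation selection, averaging crossover, Gaussian mutation.
    rng = np.random.default_rng(seed)
    dim = k * X.shape[1] + 2 * k
    P = rng.uniform(low, high, size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(p, X, y, k) for p in P])
        parents = P[np.argsort(scores)[: pop // 2]]
        idx = rng.integers(0, len(parents), (2, pop // 2))
        children = (parents[idx[0]] + parents[idx[1]]) / 2
        children += rng.normal(0, 0.05 * (high - low), children.shape)
        P = np.vstack([parents, children])
    return min(P, key=lambda p: fitness(p, X, y, k))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (80, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
candidates = [(-1.0, 1.0), (-5.0, 5.0), (-10.0, 10.0)]  # stage 1: candidate intervals
best_iv = min(candidates, key=lambda iv: fitness(ga_tune(X, y, 5, *iv, gens=15), X, y, 5))
params = ga_tune(X, y, 5, *best_iv)                     # stage 2: full fine-tuning
print("interval:", best_iv, "MSE:", fitness(params, X, y, 5))
```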
23 pages, 1296 KiB  
Article
Evaluating the Performance of Automated Machine Learning (AutoML) Tools for Heart Disease Diagnosis and Prediction
by Lauren M. Paladino, Alexander Hughes, Alexander Perera, Oguzhan Topsakal and Tahir Cetin Akinci
AI 2023, 4(4), 1036-1058; https://doi.org/10.3390/ai4040053 - 01 Dec 2023
Cited by 4 | Viewed by 2273
Abstract
Globally, over 17 million people annually die from cardiovascular diseases, with heart disease being the leading cause of mortality in the United States. The ever-increasing volume of data related to heart disease opens up possibilities for employing machine learning (ML) techniques in diagnosing and predicting heart conditions. While applying ML demands a certain level of computer science expertise—often a barrier for healthcare professionals—automated machine learning (AutoML) tools significantly lower this barrier. They enable users to construct the most effective ML models without in-depth technical knowledge. Despite their potential, there has been a lack of research comparing the performance of different AutoML tools on heart disease data. Addressing this gap, our study evaluates three AutoML tools—PyCaret, AutoGluon, and AutoKeras—against three datasets (Cleveland, Hungarian, and a combined dataset). To evaluate the efficacy of AutoML against conventional machine learning methodologies, we crafted ten machine learning models using the standard practices of exploratory data analysis (EDA), data cleansing, feature engineering, and others, utilizing the sklearn library. Our toolkit included an array of models—logistic regression, support vector machines, decision trees, random forest, and various ensemble models. Employing 5-fold cross-validation, these traditionally developed models demonstrated accuracy rates spanning from 55% to 60%. This performance is markedly inferior to that of AutoML tools, indicating the latter’s superior capability in generating predictive models. Among AutoML tools, AutoGluon emerged as the superior tool, consistently achieving accuracy rates between 78% and 86% across the datasets. PyCaret’s performance varied, with accuracy rates from 65% to 83%, indicating a dependency on the nature of the dataset. AutoKeras showed the most fluctuation in performance, with accuracies ranging from 54% to 83%. Our findings suggest that AutoML tools can simplify the generation of robust ML models that potentially surpass those crafted through traditional ML methodologies. However, we must also consider the limitations of AutoML tools and explore strategies to overcome them. The successful deployment of high-performance ML models designed via AutoML could revolutionize the treatment and prevention of heart disease globally, significantly impacting patient care.
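The contrast the study draws between hand-built sklearn pipelines and AutoML can be sketched briefly. Below, a hedged example pits a manually specified logistic regression with 5-fold cross-validation against AutoGluon, one of the three tools compared; the CSV filename and the "target" column are assumptions, not the authors' actual files.

```python
# Hand-built baseline vs. AutoML on a (hypothetical) heart disease CSV.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("heart_cleveland.csv")            # hypothetical dataset file
X, y = df.drop(columns=["target"]), df["target"]   # assumed label column

# Traditional route: one of the ten manually crafted models, 5-fold CV.
baseline = LogisticRegression(max_iter=1000)
print("LogReg 5-fold accuracy:", cross_val_score(baseline, X, y, cv=5).mean())

# AutoML route: AutoGluon searches model families and ensembles automatically.
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label="target").fit(train_data=df)
print(predictor.leaderboard())
```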
13 pages, 235 KiB  
Essay
AI and Regulations
by Paul Dumouchel
AI 2023, 4(4), 1023-1035; https://doi.org/10.3390/ai4040052 - 29 Nov 2023
Viewed by 1698
Abstract
This essay argues that the popular misrepresentation of the nature of AI has important consequences concerning how we view the need for regulations. Considering AI as something that exists in itself, rather than as a set of cognitive technologies whose characteristics—physical, cognitive, and systemic—are quite different from ours (and that, at times, differ widely among the technologies), leads to inefficient approaches to regulation. This paper aims to help practitioners of responsible AI address the way in which the technical aspects of the tools they are developing and promoting have direct and important social and political consequences.
(This article belongs to the Special Issue Standards and Ethics in AI)
13 pages, 3770 KiB  
Review
Chat GPT in Diagnostic Human Pathology: Will It Be Useful to Pathologists? A Preliminary Review with ‘Query Session’ and Future Perspectives
by Gerardo Cazzato, Marialessandra Capuzzolo, Paola Parente, Francesca Arezzo, Vera Loizzi, Enrica Macorano, Andrea Marzullo, Gennaro Cormio and Giuseppe Ingravallo
AI 2023, 4(4), 1010-1022; https://doi.org/10.3390/ai4040051 - 22 Nov 2023
Cited by 3 | Viewed by 2786
Abstract
The advent of Artificial Intelligence (AI) has, in just a few years, reached multiple areas of knowledge, including the medical and scientific fields. An increasing number of AI-based applications have been developed, among which conversational AI has emerged. Regarding the latter, ChatGPT has risen to the headlines, scientific and otherwise, for its distinct propensity to simulate a ‘real’ discussion with its interlocutor, based on appropriate prompts. Although several clinical studies using ChatGPT have already been published in the literature, very little has yet been written about its potential application in human pathology. We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, using PubMed, Scopus and the Web of Science (WoS) as databases, with the following keywords: ChatGPT OR Chat GPT, in combination with each of the following: pathology, diagnostic pathology, anatomic pathology, before 31 July 2023. A total of 103 records were initially identified in the literature search, of which 19 were duplicates. After screening for eligibility and inclusion criteria, only five publications were ultimately included. The majority of publications were original articles (n = 2), followed by a case report (n = 1), letter to the editor (n = 1) and review (n = 1). Furthermore, we performed a ‘query session’ with ChatGPT regarding pathologies such as pigmented skin lesions, malignant melanoma and variants, Gleason’s score of prostate adenocarcinoma, differential diagnosis between germ cell tumors and high grade serous carcinoma of the ovary, pleural mesothelioma and pediatric diffuse midline glioma. Although the premises are exciting and ChatGPT is able to co-advise the pathologist by providing large amounts of scientific data for use in routine microscopic diagnostic practice, there are many limitations (such as training data, the amount of data available, and ‘hallucination’ phenomena) that need to be addressed and resolved, with the caveat that an AI-driven system should always provide support and never a decision-making motive during the histopathological diagnostic process.
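The authors ran their ‘query session’ interactively in the ChatGPT interface; a programmatic equivalent through the OpenAI Python client might look like the sketch below. The model name and prompt are illustrative assumptions, and, as the abstract stresses, the output supports but never replaces histopathological judgment.

```python
# Hypothetical programmatic version of a pathology 'query session'.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4",  # assumed model; the study used the public ChatGPT UI
    messages=[
        {"role": "system", "content": "You are assisting a pathologist."},
        {"role": "user", "content": "List the main histologic variants of "
                                    "malignant melanoma and their key features."},
    ],
)
print(reply.choices[0].message.content)
```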
14 pages, 31064 KiB  
Article
Enhancing Tuta absoluta Detection on Tomato Plants: Ensemble Techniques and Deep Learning
by Nikolaos Giakoumoglou, Eleftheria-Maria Pechlivani, Nikolaos Frangakis and Dimitrios Tzovaras
AI 2023, 4(4), 996-1009; https://doi.org/10.3390/ai4040050 - 20 Nov 2023
Cited by 2 | Viewed by 1410
Abstract
Early detection and efficient management practices to control Tuta absoluta (Meyrick) infestation are crucial for safeguarding tomato production yield and minimizing economic losses. This study investigates the detection of T. absoluta infestation on tomato plants using object detection models combined with ensemble techniques. Additionally, this study highlights the importance of utilizing a dataset captured in real settings in open-field and greenhouse environments to address the complexity of real-life challenges in object detection of plant health scenarios. The effectiveness of deep-learning-based models, including Faster R-CNN and RetinaNet, was evaluated in terms of detecting T. absoluta damage. The initial model evaluations revealed diminishing performance levels across various model configurations, including different backbones and heads. To enhance detection predictions and improve mean Average Precision (mAP) scores, ensemble techniques were applied, such as Non-Maximum Suppression (NMS), Soft Non-Maximum Suppression (Soft NMS), Non-Maximum Weighted (NMW), and Weighted Boxes Fusion (WBF). The outcomes showed that the WBF technique significantly improved the mAP scores, resulting in a 20% improvement from 0.58 (the maximum mAP from individual models) to 0.70. The results of this study contribute to the field of agricultural pest detection by emphasizing the potential of deep learning and ensemble techniques in improving the accuracy and reliability of object detection models.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
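Weighted Boxes Fusion, the technique that lifted mAP from 0.58 to 0.70 here, is available in the open-source ensemble-boxes package. The sketch below fuses detections from two (hypothetical) detectors on one image; it assumes, as that library expects, boxes normalized to [0, 1] in [x1, y1, x2, y2] order, and the scores and thresholds are illustrative.

```python
# Fuse detections from two models with Weighted Boxes Fusion.
from ensemble_boxes import weighted_boxes_fusion

# Hypothetical detections for one image from Faster R-CNN and RetinaNet.
boxes_list = [
    [[0.10, 0.10, 0.40, 0.45], [0.55, 0.60, 0.90, 0.95]],  # model A
    [[0.12, 0.08, 0.42, 0.44]],                             # model B
]
scores_list = [[0.91, 0.72], [0.83]]
labels_list = [[0, 0], [0]]          # single class: T. absoluta damage

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[1, 1],       # equal trust in both detectors
    iou_thr=0.55,         # boxes above this overlap are fused
    skip_box_thr=0.10,    # drop very low-confidence boxes first
)
print(boxes, scores, labels)
```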
26 pages, 452 KiB  
Article
Who Needs External References?—Text Summarization Evaluation Using Original Documents
by Abdullah Al Foysal and Ronald Böck
AI 2023, 4(4), 970-995; https://doi.org/10.3390/ai4040049 - 15 Nov 2023
Cited by 1 | Viewed by 1941
Abstract
Nowadays, individuals can be overwhelmed by a huge number of documents being present in daily life. Capturing the necessary details is often a challenge. Therefore, it is rather important to summarize documents to obtain the main information quickly. There currently exist automatic approaches to this task, but their quality is often not properly assessed. State-of-the-art metrics rely on human-generated summaries as a reference for the evaluation. If no reference is given, the assessment will be challenging. Therefore, in the absence of human-generated reference summaries, we investigated an alternative approach to how machine-generated summaries can be evaluated. For this, we focus on the original text or document to retrieve a metric that allows a direct evaluation of automatically generated summaries. This approach is particularly helpful in cases where it is difficult or costly to find reference summaries. In this paper, we present a novel metric called Summary Score without Reference—SUSWIR—which is based on four factors already known in the text summarization community: Semantic Similarity, Redundancy, Relevance, and Bias Avoidance Analysis, overcoming drawbacks of common metrics. Therefore, we aim to close a gap in the current evaluation environment for machine-generated text summaries. The novel metric is introduced theoretically and tested on five datasets from their respective domains. The experiments conducted with SUSWIR yielded noteworthy outcomes.
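The core idea of scoring a summary against its source rather than a reference can be shown with a toy function. This is emphatically not the SUSWIR formula, only an illustration in its spirit: TF-IDF cosine similarity approximates relevance, and mean inter-sentence similarity penalizes redundancy.

```python
# Toy reference-free summary scoring: compare summary to source only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def reference_free_score(document: str, summary: str) -> float:
    vec = TfidfVectorizer().fit([document, summary])
    d, s = vec.transform([document]), vec.transform([summary])
    relevance = cosine_similarity(d, s)[0, 0]        # how well the summary covers the source
    sents = [t for t in summary.split(".") if t.strip()]
    if len(sents) > 1:
        sim = cosine_similarity(vec.transform(sents))
        # Mean pairwise similarity between summary sentences, excluding self-pairs.
        redundancy = (sim.sum() - len(sents)) / (len(sents) * (len(sents) - 1))
    else:
        redundancy = 0.0
    return relevance * (1.0 - redundancy)            # reward coverage, punish repetition

doc = "Cats sleep a lot. Cats hunt at night. Dogs are loyal companions."
print(reference_free_score(doc, "Cats sleep a lot and hunt at night."))
```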
21 pages, 1002 KiB  
Article
Chatbots Put to the Test in Math and Logic Problems: A Comparison and Assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard
by Vagelis Plevris, George Papazafeiropoulos and Alejandro Jiménez Rios
AI 2023, 4(4), 949-969; https://doi.org/10.3390/ai4040048 - 24 Oct 2023
Cited by 3 | Viewed by 5479
Abstract
In an age where artificial intelligence is reshaping the landscape of education and problem solving, our study unveils the secrets behind three digital wizards, ChatGPT-3.5, ChatGPT-4, and Google Bard, as they engage in a thrilling showdown of mathematical and logical prowess. We assess the ability of the chatbots to understand the given problem, employ appropriate algorithms or methods to solve it, and generate coherent responses with correct answers. We conducted our study using a set of 30 questions. These questions were carefully crafted to be clear, unambiguous, and fully described using plain text only. Each question has a unique and well-defined correct answer. The questions were divided into two sets of 15: Set A consists of “Original” problems that cannot be found online, while Set B includes “Published” problems that are readily available online, often with their solutions. Each question was presented to each chatbot three times in May 2023. We recorded and analyzed their responses, highlighting their strengths and weaknesses. Our findings indicate that chatbots can provide accurate solutions for straightforward arithmetic, algebraic expressions, and basic logic puzzles, although they may not be consistently accurate in every attempt. However, for more complex mathematical problems or advanced logic tasks, the chatbots’ answers, although they appear convincing, may not be reliable. Furthermore, consistency is a concern as chatbots often provide conflicting answers when presented with the same question multiple times. To evaluate and compare the performance of the three chatbots, we conducted a quantitative analysis by scoring their final answers based on correctness. Our results show that ChatGPT-4 performs better than ChatGPT-3.5 in both sets of questions. Bard ranks third in the original questions of Set A, trailing behind the other two chatbots. However, Bard achieves the best performance, taking first place in the published questions of Set B. This is likely due to Bard’s direct access to the internet, unlike the ChatGPT chatbots, which, due to their designs, do not have external communication capabilities.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)
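The study's scoring protocol, three attempts per question graded on final-answer correctness, reduces to a short harness. The sketch below uses hypothetical stand-in data, not the study's actual questions or transcripts.

```python
# Score repeated chatbot attempts for final-answer correctness.
from statistics import mean

questions = {"Q1": "42", "Q2": "7"}                 # question id -> correct answer
responses = {                                        # chatbot -> id -> 3 attempts
    "ChatGPT-4": {"Q1": ["42", "42", "42"], "Q2": ["7", "9", "7"]},
    "Bard":      {"Q1": ["41", "42", "42"], "Q2": ["9", "9", "9"]},
}

for bot, answers in responses.items():
    per_q = [mean(a == questions[q] for a in attempts)  # fraction correct per question
             for q, attempts in answers.items()]
    print(f"{bot}: mean correctness {mean(per_q):.2f}")
```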
23 pages, 9079 KiB  
Article
Deep Learning Performance Characterization on GPUs for Various Quantization Frameworks
by Muhammad Ali Shafique, Arslan Munir and Joonho Kong
AI 2023, 4(4), 926-948; https://doi.org/10.3390/ai4040047 - 18 Oct 2023
Viewed by 2011
Abstract
Deep learning is employed in many applications, such as computer vision, natural language processing, robotics, and recommender systems. Large and complex neural networks lead to high accuracy; however, they adversely affect many aspects of deep learning performance, such as training time, latency, throughput, energy consumption, and memory usage in the training and inference stages. To solve these challenges, various optimization techniques and frameworks have been developed for the efficient performance of deep learning models in the training and inference stages. Although optimization techniques such as quantization have been studied thoroughly in the past, less work has been done to study the performance of frameworks that provide quantization techniques. In this paper, we have used different performance metrics to study the performance of various quantization frameworks, including TensorFlow automatic mixed precision and TensorRT. These performance metrics include training time and memory utilization in the training stage along with latency and throughput for graphics processing units (GPUs) in the inference stage. We have applied the automatic mixed precision (AMP) technique during the training stage using the TensorFlow framework, while for inference we have utilized the TensorRT framework for the post-training quantization technique using the TensorFlow TensorRT (TF-TRT) application programming interface (API). We performed model profiling for different deep learning models, datasets, image sizes, and batch sizes for both the training and inference stages, the results of which can help developers and researchers to devise and deploy efficient deep learning models for GPUs.
(This article belongs to the Special Issue Artificial Intelligence-Based Image Processing and Computer Vision)
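The two techniques profiled here can be sketched side by side: AMP during training and TF-TRT post-training conversion for inference. Exact TF-TRT arguments vary across TensorFlow versions, and "saved_model_dir" is a placeholder path, so treat this as a hedged outline rather than the authors' profiling setup.

```python
# AMP training policy plus TF-TRT FP16 conversion (version-dependent API).
import tensorflow as tf

# --- Training with AMP: compute in float16, keep variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, dtype="float32"),   # keep outputs float32 for stability
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# --- Inference: convert a SavedModel with TF-TRT at FP16 precision.
from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="saved_model_dir",      # placeholder path
    precision_mode="FP16",
)
converter.convert()
converter.save("saved_model_trt")
```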
22 pages, 1593 KiB  
Article
From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems
by Ellen Hohma and Christoph Lütge
AI 2023, 4(4), 904-925; https://doi.org/10.3390/ai4040046 - 13 Oct 2023
Viewed by 1794
Abstract
The current endeavor of moving AI ethics from theory to practice can frequently be observed in academia and industry and indicates a major achievement in the theoretical understanding of responsible AI. Its practical application, however, currently poses challenges, as mechanisms for translating the proposed principles into easily feasible actions are often considered unclear and not ready for practice. In particular, a lack of uniform, standardized approaches that are aligned with regulatory provisions is often highlighted by practitioners as a major drawback to the practical realization of AI governance. To address these challenges, we propose a stronger shift in focus from solely the trustworthiness of AI products to the perceived trustworthiness of the development process by introducing a concept for a trustworthy development process for AI systems. We derive this process from a semi-systematic literature analysis of common AI governance documents to identify the most prominent measures for operationalizing responsible AI and compare them to implications for AI providers from EU-centered regulatory frameworks. Assessing the resulting process along derived characteristics of trustworthy processes shows that, while clarity is often mentioned as a major drawback, and many AI providers tend to wait for finalized regulations before reacting, the summarized landscape of proposed AI governance mechanisms can already cover many of the binding and non-binding demands circulating similar activities to address fundamental risks. Furthermore, while many factors of procedural trustworthiness are already fulfilled, limitations are seen particularly due to the vagueness of currently proposed measures, calling for a detailing of measures based on use cases and the system’s context.
(This article belongs to the Special Issue Standards and Ethics in AI)
16 pages, 614 KiB  
Concept Paper
Algorithms for All: Can AI in the Mortgage Market Expand Access to Homeownership?
by Vanessa G. Perry, Kirsten Martin and Ann Schnare
AI 2023, 4(4), 888-903; https://doi.org/10.3390/ai4040045 - 11 Oct 2023
Cited by 1 | Viewed by 3160
Abstract
Artificial intelligence (AI) is transforming the mortgage market at every stage of the value chain. In this paper, we examine the potential for the mortgage industry to leverage AI to overcome the historical and systemic barriers to homeownership for members of Black, Brown, and lower-income communities. We begin by proposing societal, ethical, legal, and practical criteria that should be considered in the development and implementation of AI models. Based on this framework, we discuss the applications of AI that are transforming the mortgage market, including digital marketing, the inclusion of non-traditional “big data” in credit scoring algorithms, AI property valuation, and loan underwriting models. We conclude that although the current AI models may reflect the same biases that have existed historically in the mortgage market, opportunities exist for proactive, responsible AI model development designed to remove the systemic barriers to mortgage credit access.
(This article belongs to the Special Issue Standards and Ethics in AI)
13 pages, 1669 KiB  
Communication
Can Artificial Intelligence Aid Diagnosis by Teleguided Point-of-Care Ultrasound? A Pilot Study for Evaluating a Novel Computer Algorithm for COVID-19 Diagnosis Using Lung Ultrasound
by Laith R. Sultan, Allison Haertter, Maryam Al-Hasani, George Demiris, Theodore W. Cary, Yale Tung-Chen and Chandra M. Sehgal
AI 2023, 4(4), 875-887; https://doi.org/10.3390/ai4040044 - 10 Oct 2023
Cited by 2 | Viewed by 1773
Abstract
With the 2019 coronavirus disease (COVID-19) pandemic, there is an increasing demand for remote monitoring technologies to reduce patient and provider exposure. One field with increasing potential is teleguided ultrasound, where telemedicine and point-of-care ultrasound (POCUS) merge to create this new field. Teleguided POCUS can minimize staff exposure while preserving patient safety and oversight during bedside procedures. In this paper, we propose the use of teleguided POCUS supported by AI technologies for the remote monitoring of COVID-19 patients by inexperienced personnel, including self-monitoring by the patients themselves. Our hypothesis is that AI technologies can facilitate the remote monitoring of COVID-19 patients through the utilization of POCUS devices, even when operated by individuals without formal medical training. In pursuit of this goal, we performed a pilot analysis to evaluate the performance of users with different clinical backgrounds using a computer-based system for COVID-19 detection using lung ultrasound. The purpose of the analysis was to emphasize the potential of the proposed AI technology for improving diagnostic performance, especially for users with less experience.
(This article belongs to the Special Issue Feature Papers for AI)
31 pages, 1261 KiB  
Article
Anthropocentrism and Environmental Wellbeing in AI Ethics Standards: A Scoping Review and Discussion
by Eryn Rigley, Adriane Chapman, Christine Evers and Will McNeill
AI 2023, 4(4), 844-874; https://doi.org/10.3390/ai4040043 - 08 Oct 2023
Viewed by 2462
Abstract
As AI deployment has broadened, so too has awareness of the ethical implications and problems that may ensue from this deployment. In response, groups across multiple domains have issued AI ethics standards that rely on vague, high-level principles to find consensus. One such high-level principle that is common across the AI landscape is ‘human-centredness’, though oftentimes it is applied without due investigation into its merits and limitations and without a clear, common definition. This paper undertakes a scoping review of AI ethics standards to examine the commitment to ‘human-centredness’ and how this commitment interacts with other ethical concerns, namely, concerns for nonhuman animals and environmental wellbeing. We found that human-centred AI ethics standards tend to prioritise humans over nonhumans more so than nonhuman-centred standards. A critical analysis of our findings suggests that a commitment to human-centredness within AI ethics standards accords with the definition of anthropocentrism in moral philosophy: that humans have, at least, more intrinsic moral value than nonhumans. We consider some of the limitations of anthropocentric AI ethics, which include permitting harm to the environment and animals and undermining the stability of ecosystems.
(This article belongs to the Special Issue Standards and Ethics in AI)
13 pages, 288 KiB  
Review
Ethics and Transparency Issues in Digital Platforms: An Overview
by Leilasadat Mirghaderi, Monika Sziron and Elisabeth Hildt
AI 2023, 4(4), 831-843; https://doi.org/10.3390/ai4040042 - 28 Sep 2023
Cited by 1 | Viewed by 3452
Abstract
There is an ever-increasing application of digital platforms that utilize artificial intelligence (AI) in our daily lives. In this context, the matters of transparency and accountability remain major concerns that are yet to be effectively addressed. The aim of this paper is to identify the zones of non-transparency in the context of digital platforms and provide recommendations for improving transparency issues on digital platforms. First, by surveying the literature and reflecting on the concept of platformization, choosing an AI definition that can be adopted by different stakeholders, and utilizing AI ethics, we will identify zones of non-transparency in the context of digital platforms. Second, after identifying the zones of non-transparency, we go beyond a mere summary of existing literature and provide our perspective on how to address the raised concerns. Based on our survey of the literature, we find that three major zones of non-transparency exist in digital platforms. These include a lack of transparency with regard to who contributes to platforms; lack of transparency with regard to who is working behind platforms, the contributions of those workers, and the working conditions of digital workers; and lack of transparency with regard to how algorithms are developed and governed. Considering the abundance of high-level principles in the literature that cannot be easily operationalized, this is an attempt to bridge the gap between principles and operationalization.
(This article belongs to the Special Issue Standards and Ethics in AI)
19 pages, 3076 KiB  
Article
A General Machine Learning Model for Assessing Fruit Quality Using Deep Image Features
by Ioannis D. Apostolopoulos, Mpesi Tzani and Sokratis I. Aznaouridis
AI 2023, 4(4), 812-830; https://doi.org/10.3390/ai4040041 - 27 Sep 2023
Cited by 2 | Viewed by 4873
Abstract
Fruit quality is a critical factor in the produce industry, affecting producers, distributors, consumers, and the economy. High-quality fruits are more appealing, nutritious, and safe, boosting consumer satisfaction and revenue for producers. Artificial intelligence can aid in assessing the quality of fruit using images. This paper presents a general machine learning model for assessing fruit quality using deep image features. This model leverages the learning capabilities of the recent successful networks for image classification called vision transformers (ViT). The ViT model is built and trained with a combination of various fruit datasets and taught to distinguish between good and rotten fruit images based on their visual appearance and not predefined quality attributes. The general model demonstrated impressive results in accurately identifying the quality of various fruits, such as apples (with a 99.50% accuracy), cucumbers (99%), grapes (100%), kakis (99.50%), oranges (99.50%), papayas (98%), peaches (98%), tomatoes (99.50%), and watermelons (98%). However, it showed slightly lower performance in identifying guavas (97%), lemons (97%), limes (97.50%), mangoes (97.50%), pears (97%), and pomegranates (97%).
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
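The approach, fine-tuning a vision transformer to separate good from rotten fruit, can be outlined with torchvision's ViT-B/16 as a stand-in; the authors' exact architecture and data pipeline may differ, and the dummy batch below replaces a real image loader.

```python
# Fine-tune a pretrained ViT for binary good-vs-rotten fruit classification.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # good vs. rotten

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch; real code would load fruit images
# resized to 224x224 and normalized with the weights' preprocessing.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```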
25 pages, 6143 KiB  
Article
Unveiling the Transparency of Prediction Models for Spatial PM2.5 over Singapore: Comparison of Different Machine Learning Approaches with eXplainable Artificial Intelligence
by M. S. Shyam Sunder, Vinay Anand Tikkiwal, Arun Kumar and Bhishma Tyagi
AI 2023, 4(4), 787-811; https://doi.org/10.3390/ai4040040 - 27 Sep 2023
Viewed by 1496
Abstract
Aerosols play a crucial role in the climate system due to direct and indirect effects, such as scattering and absorbing radiant energy. They also have adverse effects on visibility and human health. Humans are exposed to fine PM2.5, which has adverse health impacts related to cardiovascular and respiratory diseases. Long-term trends in PM concentrations are influenced by emissions and meteorological variations, while meteorological factors primarily drive short-term variations. Factors such as vegetation cover, relative humidity, temperature, and wind speed impact the divergence in the PM2.5 concentrations on the surface. Machine learning has proved to be a good predictor of air quality. This study focuses on predicting PM2.5 with these parameters as input for spatial and temporal information. The work analyzes in situ observations for PM2.5 over Singapore for seven years (2014–2021) at five locations, and these datasets are used for spatial prediction of PM2.5. The study aims to provide a novel framework for temporal prediction using Random Forest (RF), Gradient Boosting (GB) regression, and the Tree-based Pipeline Optimization Tool (TP), an AutoML approach based on a meta-heuristic genetic algorithm. TP produced reasonable Global Performance Index (GPI) values; 7.4 was the highest GPI value in August 2016, and the lowest was −0.6 in June 2019. This indicates the positive performance of the TP model; even the negative values are less than those of other models, denoting less pessimistic predictions. The outcomes are explained with eXplainable Artificial Intelligence (XAI) techniques, which help to investigate the fidelity of feature importance of the machine learning models and to extract information regarding the rhythmic shift of the PM2.5 pattern.
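The three model families compared here, plus a SHAP-based XAI step, can be sketched on synthetic data. The feature names and data below are illustrative (the study used meteorological observations from Singapore), and TPOT's API differs somewhat across versions.

```python
# RF, GB, and TPOT regressors for PM2.5-style prediction, with SHAP for XAI.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # e.g. humidity, temperature, wind, vegetation
y = 20 + 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
gb = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("RF R2:", rf.score(X_te, y_te), "GB R2:", gb.score(X_te, y_te))

# TPOT evolves whole pipelines with a genetic algorithm (the 'TP' model).
from tpot import TPOTRegressor
tpot = TPOTRegressor(generations=3, population_size=10, random_state=0)
tpot.fit(X_tr, y_tr)

# XAI: SHAP values show how each feature shifts individual predictions.
import shap
shap_values = shap.TreeExplainer(rf).shap_values(X_te)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```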