AI, Volume 3, Issue 4 (December 2022) – 14 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 1061 KiB  
Article
Estimation of Clinch Joint Characteristics Based on Limited Input Data Using Pre-Trained Metamodels
by Christoph Zirngibl, Benjamin Schleich and Sandro Wartzack
AI 2022, 3(4), 990-1006; https://doi.org/10.3390/ai3040059 - 08 Dec 2022
Viewed by 2038
Abstract
Given strict emission targets and legal requirements, especially in the automotive industry, environmentally friendly and, at the same time, versatile production technologies are gaining importance. In this regard, the use of mechanical joining processes, such as clinching, enables the assembly of sheet metals with strength properties similar to those of established thermal joining technologies. However, to guarantee a high reliability of the generated joint connection, the selection of a best-fitting joining technology as well as the meaningful description of individual joint properties is essential. In the context of clinching, few contributions have to date investigated the metamodel-based estimation and optimization of joint characteristics, such as neck or interlock thickness, by applying machine learning and genetic algorithms. To this end, several regression models have been trained on varying databases and numbers of input parameters. However, if product engineers can only provide limited data for a new joining task, such as incomplete information on applied joining tool dimensions, previously trained metamodels often reach their limits. This often results in a significant loss of prediction quality and leads to increasing uncertainties and inaccuracies within the metamodel-based design of a clinch joint connection. Motivated by this, the presented contribution investigates different machine learning algorithms regarding their ability to achieve satisfactory estimation accuracy on limited input data by applying a statistically based feature selection method. Through this, it is possible to identify which regression models are suitable for predicting clinch joint characteristics considering only a minimum set of required input features.
Thus, in addition to the opportunity to decrease the training effort as well as the model complexity, the subsequent formulation of design equations can pave the way to a more versatile application and reuse of pretrained metamodels on varying tool configurations for a given clinch joining task. Full article
(This article belongs to the Special Issue Feature Papers for AI)
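The statistically based feature selection method is not spelled out in the abstract; one common choice, ranking candidate features by their absolute Pearson correlation with the target and keeping only the top k, can be sketched as follows (the toy data and function names are illustrative assumptions, not the paper's):

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(X, y, k):
    """Rank input columns by |r| against the target and keep the top k."""
    scores = [(abs(pearson([row[j] for row in X], y)), j)
              for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# toy data: column 0 drives the response, column 1 is pure noise
random.seed(0)
X = [[float(i), random.random()] for i in range(20)]
y = [2.0 * row[0] + 0.1 for row in X]
print(select_features(X, y, 1))  # -> [0]
```

A reduced feature set of this kind is what allows a pre-trained metamodel to be reused when engineers can only supply a few tool parameters.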
15 pages, 1759 KiB  
Article
Gamma Ray Source Localization for Time Projection Chamber Telescopes Using Convolutional Neural Networks
by Brandon Khek, Aashwin Mishra, Micah Buuck and Tom Shutt
AI 2022, 3(4), 975-989; https://doi.org/10.3390/ai3040058 - 30 Nov 2022
Viewed by 2090
Abstract
Diverse phenomena such as positron annihilation in the Milky Way, merging binary neutron stars, and dark matter can be better understood by studying their gamma ray emission. Despite their importance, MeV gamma rays have been poorly explored at sensitivities that would allow for deeper insight into the nature of the gamma emitting objects. In response, a liquid argon time projection chamber (TPC) gamma ray instrument concept called GammaTPC has been proposed and promises exploration of the entire sky with a large field of view, large effective area, and high polarization sensitivity. Optimizing the pointing capability of this instrument is crucial and can be accomplished by leveraging convolutional neural networks to reconstruct electron recoil paths from Compton scattering events within the detector. In this investigation, we develop a machine learning model architecture to accommodate a large data set of high fidelity simulated electron tracks and reconstruct paths. We create two model architectures: one to predict the electron recoil track origin and one for the initial scattering direction. We find that these models predict the true origin and direction with extremely high accuracy, thereby optimizing the observatory’s estimates of the sky location of gamma ray sources. Full article
(This article belongs to the Special Issue Feature Papers for AI)
14 pages, 1072 KiB  
Commentary
An Ethical Framework for Artificial Intelligence and Sustainable Cities
by David Pastor-Escuredo, Philip Treleaven and Ricardo Vinuesa
AI 2022, 3(4), 961-974; https://doi.org/10.3390/ai3040057 - 25 Nov 2022
Cited by 4 | Viewed by 6412
Abstract
The digital revolution has brought ethical crossroads of technology and behavior, especially in the realm of sustainable cities. The need for a comprehensive and constructive ethical framework is emerging as digital platforms struggle to articulate the transformations required to accomplish the sustainable development goal (SDG) 11 (on sustainable cities), and the remainder of the related SDGs. The unequal structure of the global system leads to dynamic and systemic problems, which have a more significant impact on those who are most vulnerable. Ethical frameworks based only on the individual level are no longer sufficient as they lack the necessary articulation to provide solutions to the new systemic challenges. A new ethical vision of digitalization must comprise the understanding of the scales and complex interconnections among SDGs and the ongoing socioeconomic and industrial revolutions. Many of the current social systems are internally fragile and very sensitive to external factors and threats, which lead to unethical situations. Furthermore, the multilayered net-like social tissue generates clusters of influence and leadership that prevent communities from developing properly. Digital technology has also had an impact at the individual level, posing several risks including a more homogeneous and predictable humankind. To preserve the core of humanity, we propose an ethical framework to empower individuals centered on the cities and interconnected with the socioeconomic ecosystem and the environment through the complex relationships of the SDGs. Only by combining human-centered and collectiveness-oriented digital development will it be possible to construct new social models and interactions that are ethical. Thus, it is necessary to combine ethical principles with the digital innovation under way across all the dimensions of sustainability. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
13 pages, 895 KiB  
Article
Detecting Emotions behind the Screen
by Najla Alkaabi, Nazar Zaki, Heba Ismail and Manzoor Khan
AI 2022, 3(4), 948-960; https://doi.org/10.3390/ai3040056 - 22 Nov 2022
Cited by 4 | Viewed by 2175
Abstract
Students’ emotional health is a major contributor to educational success. Hence, to support students’ success in online learning platforms, we contribute an analysis of the emotional orientations and triggers in their text messages. Such analysis could be automated and used for early detection of the emotional status of students. In our approach, we relied on transfer learning to train the model, using the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model. The model classified messages as positive, negative, or neutral. The transfer learning model was then used to classify a larger unlabeled dataset; fine-grained emotions were then extracted from the negative messages using the NRC lexicon. In our analysis of the results, we focused on discovering the dominant negative emotions expressed and the most common words students used to express them. We believe this can be an important clue or first line of detection that may help mental health practitioners develop targeted programs for students, especially with the massive shift to online education due to the COVID-19 pandemic. We compared our model to a state-of-the-art ML-based model and found that our model outperformed the other by achieving 91% accuracy compared to 86%. To the best of our knowledge, this is the first study to focus on a mental health analysis of students in online educational platforms other than massive open online courses (MOOCs). Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
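The fine-grained step, tagging negative messages with NRC-style emotion categories, amounts to a lexicon lookup. The tiny word list below is a hypothetical stand-in for the real NRC lexicon, which maps thousands of English words to eight basic emotions:

```python
from collections import Counter

# Hypothetical mini-lexicon; the real NRC lexicon is far larger.
LEXICON = {
    "fail": "sadness", "exam": "fear", "deadline": "fear",
    "angry": "anger", "alone": "sadness", "hate": "anger",
}

def dominant_emotions(negative_messages):
    """Count lexicon emotions over messages already classified as negative."""
    counts = Counter()
    for msg in negative_messages:
        for word in msg.lower().split():
            if word in LEXICON:
                counts[LEXICON[word]] += 1
    return counts.most_common()

msgs = ["I will fail this exam",
        "the deadline makes me angry",
        "I feel so alone"]
print(dominant_emotions(msgs))  # sadness and fear dominate this toy sample
```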
17 pages, 7848 KiB  
Article
A Patient-Specific Algorithm for Lung Segmentation in Chest Radiographs
by Manawaduge Supun De Silva, Barath Narayanan Narayanan and Russell C. Hardie
AI 2022, 3(4), 931-947; https://doi.org/10.3390/ai3040055 - 18 Nov 2022
Cited by 2 | Viewed by 2776
Abstract
Lung segmentation plays an important role in computer-aided detection and diagnosis using chest radiographs (CRs). Currently, the U-Net and DeepLabv3+ convolutional neural network architectures are widely used to perform CR lung segmentation. To boost performance, ensemble methods are often used, whereby probability map outputs from several networks operating on the same input image are averaged. However, not all networks perform adequately for any specific patient image, even if the average network performance is good. To address this, we present a novel multi-network ensemble method that employs a selector network. The selector network evaluates the segmentation outputs from several networks; on a case-by-case basis, it selects which outputs are fused to form the final segmentation for that patient. Our candidate lung segmentation networks include U-Net, with five different encoder depths, and DeepLabv3+, with two different backbone networks (ResNet50 and ResNet18). Our selector network is a ResNet18 image classifier. We perform all training using the publicly available Shenzhen CR dataset. Performance testing is carried out with two independent publicly available CR datasets, namely, Montgomery County (MC) and Japanese Society of Radiological Technology (JSRT). Intersection-over-Union scores for the proposed approach are 13% higher than the standard averaging ensemble method on MC and 5% better on JSRT. Full article
(This article belongs to the Special Issue Feature Papers for AI)
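The selector network itself is a trained ResNet18 classifier; purely as a structural sketch (with invented selector scores and tiny 2x2 "probability maps"), the per-case fusion step it drives might look like this:

```python
def selector_fusion(prob_maps, selector_scores, threshold=0.5):
    """Average only the probability maps the selector trusts; if none pass
    the threshold, fall back to averaging all of them."""
    chosen = [m for m, s in zip(prob_maps, selector_scores)
              if s >= threshold] or prob_maps
    n = len(chosen)
    rows, cols = len(prob_maps[0]), len(prob_maps[0][0])
    return [[sum(m[i][j] for m in chosen) / n for j in range(cols)]
            for i in range(rows)]

# two tiny candidate maps; the selector trusts only the first one
maps = [[[0.9, 0.1], [0.8, 0.2]],
        [[0.1, 0.9], [0.2, 0.8]]]
print(selector_fusion(maps, [0.9, 0.3]))  # -> [[0.9, 0.1], [0.8, 0.2]]
```

The contrast with the standard ensemble is the `chosen` filter: plain averaging would always fuse every map, even for patients where some networks fail.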
41 pages, 13750 KiB  
Article
War Game between Two Matched Fleets with Goal Options and Tactical Optimization
by Zhi-Xiang Jia and Jean-Fu Kiang
AI 2022, 3(4), 890-930; https://doi.org/10.3390/ai3040054 - 14 Nov 2022
Viewed by 2169
Abstract
A war game between two matched fleets of equal size and capability is designed and simulated in this work. Each fleet is composed of a carrier vessel (CV), a guided-missile cruiser (CG), and two guided-missile destroyers (DDGs). Each vessel is equipped with specific weapons, including fighters, missiles, and a close-in weapon system (CIWS), to carry out tactical operations. The maneuverability, maximum flying distance, and kill probability of different weapons are specified. Three goal options, a defense option and two more aggressive ones, are available to each fleet. A particle-pair swarm optimization (P2SO) algorithm is proposed to optimize the tactical parameters of both fleets concurrently according to their chosen options. The parameters to be optimized include the take-off time delay of fighters, the launch time delay of anti-ship missiles (ASHMs), and the initial flying directions of fighters and ASHMs. All six possible contests between options are simulated and analyzed in terms of payoff, impact scores on CV, CG, and DDG, and the number of lost fighters. Some interesting outlier cases are inspected to gain insight into this game. Full article
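The particle-pair variant (P2SO) is the authors' contribution and is not reproduced here; for orientation, the plain particle swarm optimization it builds on can be sketched as follows (the inertia and acceleration coefficients are conventional textbook values, not taken from the paper):

```python
import random

def pso(f, dim, n_particles=20, iters=60, lo=-5.0, hi=5.0, seed=1):
    """Plain particle swarm optimization minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pval = [f(p) for p in pos]
    gi = min(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[gi][:], pval[gi]         # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest

best = pso(lambda x: sum(xi * xi for xi in x), dim=2)
print(best)  # near the optimum [0, 0]
```

In the paper's setting, the coordinate vector would hold a fleet's tactical parameters (take-off and launch delays, initial headings) and f would be the simulated payoff.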
19 pages, 3055 KiB  
Article
Male and Female Hormone Reading to Predict Pregnancy Percentage Using a Deep Learning Technique: A Real Case Study
by Lara Shboul, Kamil Fram, Saleh Sharaeh, Mohammad Alshraideh, Nancy Shaar and Njwan Alshraideh
AI 2022, 3(4), 871-889; https://doi.org/10.3390/ai3040053 - 24 Oct 2022
Cited by 1 | Viewed by 2147
Abstract
Diagnosing gynecological diseases is a significant difficulty for the medical sector. Numerous patients visit gynecological clinics for pregnancies as well as for other illnesses, such as polycystic ovarian syndrome, ovarian cysts, endometritis, menopause, and others. In relation to pregnancy, patients, whether they are men, women, or both, may experience a variety of issues. As a result, in this research, we propose a method that makes use of artificial neural networks (ANN) to help gynecologists predict the success rate of a pregnancy based on the reading of the pregnancy hormone ratio in the blood. The ANN consists of groups of perceptrons (neurons) at each layer; in the final hidden layer, however, the genetic algorithm (GA) and the Bat algorithm were used instead. These two algorithms are well suited to optimizing models that estimate or predict a value. The GA attempts to determine the testing cost using equations, and the Bat algorithm attempts to determine the training cost. To improve the performance of the ANN, the GA collaborates with the Bat algorithm in a hybrid approach in the hidden layer of the ANN, making the resulting pregnancy prediction more accurate. Based on the flexibility of each algorithm, gynecologists can predict the success rate of a pregnancy. With the help of our methods, we were able to run experiments using data collected from 35,207 patients and reach a classification accuracy of 96.5%. These data were gathered from the Department of Obstetrics and Gynecology at the Hospital University of Jordan (HUJ).
The proposed method aimed to predict the pregnancy rate of success regardless of whether the data are comprised of patients whose pregnancy hormones are in the normal range or of patients that suffer from factors favoring sterility, such as infections, malformations, and associated diseases (e.g., diabetes). Full article
8 pages, 256 KiB  
Article
Structural Model Based on Genetic Algorithm for Inhibiting Fatty Acid Amide Hydrolase
by Cosmin Trif, Dragos Paul Mihai, Anca Zanfirescu and George Mihai Nitulescu
AI 2022, 3(4), 863-870; https://doi.org/10.3390/ai3040052 - 13 Oct 2022
Cited by 1 | Viewed by 1820
Abstract
Fatty acid amide hydrolase (FAAH) is an enzyme responsible for the degradation of anandamide, an endocannabinoid. Pharmacologically blocking this target can lead to anxiolytic effects; therefore, new inhibitors can improve therapy in this field. In order to speed up the process of drug discovery, various in silico methods can be used, such as molecular docking, quantitative structure–activity relationship models (QSAR), and artificial intelligence (AI) classification algorithms. Besides architecture, one important factor for an AI model with high accuracy is the dataset quality. This issue can be solved by a genetic algorithm that can select optimal features for the prediction. The objective of the current study is to use this feature selection method in order to identify the most relevant molecular descriptors that can be used as independent variables, thus improving the efficacy of AI algorithms that can predict FAAH inhibitors. The model that used features chosen by the genetic algorithm had better accuracy than the model that used all molecular descriptors generated by the CDK descriptor calculator 1.4.6 software. Hence, carefully selecting the input data used by AI classification algorithms by using a GA is a promising strategy in drug development. Full article
(This article belongs to the Special Issue Feature Papers for AI)
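A genetic algorithm over 0/1 feature masks is easy to sketch; the toy fitness below (rewarding two "informative" descriptors and lightly penalizing mask size) is an invented stand-in for the study's model-accuracy objective:

```python
import random

def ga_select(fitness, n_features, pop=30, gens=40, seed=2):
    """Evolve 0/1 masks over feature columns, maximizing a fitness callback."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[:pop // 2]                      # keep the fitter half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)           # one-point crossover
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

# toy fitness: descriptors 0 and 3 are informative, extras cost a little
informative = {0, 3}
fit = lambda mask: sum(mask[i] for i in informative) - 0.1 * sum(mask)
best = ga_select(fit, n_features=8)
print(best)  # the informative descriptors 0 and 3 survive selection
```

In the study itself the fitness would be the accuracy of a classifier trained on the masked CDK descriptors.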
19 pages, 1194 KiB  
Article
Who Was Wrong? An Object Detection Based Responsibility Assessment System for Crossroad Vehicle Collisions
by Helton Agbewonou Yawovi, Masato Kikuchi and Tadachika Ozono
AI 2022, 3(4), 844-862; https://doi.org/10.3390/ai3040051 - 09 Oct 2022
Cited by 1 | Viewed by 2752
Abstract
Car crashes, also known as vehicle collisions, are recurrent events that occur every day. As long as vehicles exist, vehicle collisions will, unfortunately, continue to occur. When a car crash occurs, it is important to investigate and determine the actors’ responsibilities. The police in charge of that task, as well as claims adjusters, usually proceed manually: going to the crash site, collecting data in the field, drafting the crash, and assessing responsibilities based on road rules. This latter task of assessing responsibilities usually takes time and requires sufficient knowledge of road rules. With the aim of supporting the police and claims adjusters and simplifying the process of responsibility determination, we built a system that can automatically assess actors’ responsibilities within a crossroad crash. The system is mainly based on image detection and uses a rule-based knowledge system to assess responsibilities within driving recorders’ videos. It uses the crash video recorded by one of the involved vehicles’ driving recorders as the input data source and outputs the evaluation of each actor’s responsibility within the crash. The rule-based knowledge system was implemented to make the reasoning about responsibility assessment clear and allow users to easily understand the reasons for the results. The system consists of three modules: (a) a crash time detection module, (b) a traffic sign detection module, and (c) a responsibility assessment module. To detect a crash within a video, we discovered that the simple application of existing vehicle detection models would result in wrong detections with many false positives. To solve the issue, we made our proposed model take into account only the collided vehicle, its shape, and its position within the video.
Moreover, with the biggest challenge being finding data to train the system’s detection models, we built our own dataset from scratch with more than 1500 images of head-on car crashes within the context of crossroad accidents taken by the driving recorder of one of the vehicles involved in the crash. The experiment’s results show how the system performs in (1) detecting the crash time, (2) detecting traffic signs, and (3) assessing each party’s responsibility. The system performs well when light conditions and the visibility of collided objects are good and traffic lights’ view distances are close. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
24 pages, 5131 KiB  
Article
A Neural Network-Based Fusion Approach for Improvement of SAR Interferometry-Based Digital Elevation Models in Plain and Hilly Regions of India
by Priti Girohi and Ashutosh Bhardwaj
AI 2022, 3(4), 820-843; https://doi.org/10.3390/ai3040050 - 09 Oct 2022
Cited by 3 | Viewed by 2248
Abstract
Interferometric Synthetic Aperture Radar (InSAR) is an advanced remote sensing technique for studying the Earth’s surface topography and deformations; it is used to generate high-quality Digital Elevation Models (DEMs). DEMs are a crucial and primary input to various topographical quantification and modelling applications. The quality of input DEMs can be further improved using fusion methods, which combine multi-sensor or multi-temporal datasets intelligently to retrieve the best information from the input data. This research study is based on developing a Neural Network-based fusion approach for improving InSAR-based DEMs in plain and hilly terrain parts of India. The study areas comprise relatively plain terrain from Ghaziabad and hilly terrain of Dehradun and their surrounding regions. The training dataset consists of DEM elevations and derived topographic attributes like slope, aspect, topographic position index (TPI), terrain ruggedness index (TRI), and vector roughness measure (VRM) in different land use land cover classes of the study areas. The spaceborne altimetry ICESat-2 ATL08 photon data are used as a reference elevation. A Feed Forward Neural Network with a backpropagation algorithm is trained based on the prepared training samples. The trained model produces fused DEMs by learning the relationship between the input and target samples; this is used to predict elevations for the test areas. The accuracy of results from the models is assessed with TanDEM-X 90 m DEM. The fused DEMs show significant improvement in terms of RMSE (Root Mean Square Error) over the input DEMs, with an improvement factor of 94.65% in plain areas and 82.62% in hilly areas. The study concludes that the ANN, with its universal approximation property, can significantly improve the fused DEM. Full article
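As a minimal illustration of the fusion model (not the authors' network or data), a one-hidden-layer feed-forward net trained with plain backpropagation can be sketched; here it learns a toy "fusion" rule, the mean of two noisy DEM elevations:

```python
import math
import random

def train_ffnn(samples, hidden=8, lr=0.05, epochs=1000, seed=3):
    """One-hidden-layer regression net trained with plain SGD backprop."""
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, t in samples:
            h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            y = sum(w * hi for w, hi in zip(W2, h)) + b2
            err = y - t                      # dL/dy for L = err^2 / 2
            for j in range(hidden):
                g = err * W2[j] * (1 - h[j] ** 2)   # backprop through tanh
                for i in range(n_in):
                    W1[j][i] -= lr * g * x[i]
                b1[j] -= lr * g
                W2[j] -= lr * err * h[j]
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sum(w * hi for w, hi in zip(W2, h)) + b2
    return predict

# toy "fusion": corrected elevation = mean of two input DEM elevations
data = [((a / 10, b / 10), (a / 10 + b / 10) / 2)
        for a in range(10) for b in range(10)]
f = train_ffnn(data)
print(round(f((0.5, 0.7)), 2))  # close to 0.6
```

The paper's actual inputs are DEM elevations plus terrain attributes (slope, aspect, TPI, TRI, VRM), with ICESat-2 photon elevations as the regression target.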
11 pages, 337 KiB  
Article
Dimensionality Reduction Statistical Models for Soil Attribute Prediction Based on Raw Spectral Data
by Marcelo Chan Fu Wei, Ricardo Canal Filho, Tiago Rodrigues Tavares, José Paulo Molin and Afrânio Márcio Corrêa Vieira
AI 2022, 3(4), 809-819; https://doi.org/10.3390/ai3040049 - 30 Sep 2022
Cited by 1 | Viewed by 1934
Abstract
To obtain a better performance when modeling soil spectral data for attribute prediction, researchers frequently resort to data pretreatment, aiming to reduce noise and highlight the spectral features. Although dimensionality reduction statistical approaches that can cope with sparse, high-dimensional data exist, few studies have explored their applicability in soil sensing. Therefore, this study’s objective was to assess the predictive performance of two dimensionality reduction statistical models that are not widespread in the proximal soil sensing community: principal components regression (PCR) and the least absolute shrinkage and selection operator (lasso). Here, these two approaches were compared with multiple linear regression (MLR). All of the modelling strategies were applied without employing pretreatment techniques for soil attribute determination using X-ray fluorescence spectroscopy (XRF) and visible and near-infrared diffuse reflectance spectroscopy (Vis-NIR) data. In addition, the achieved results were compared against the ones reported in the literature that applied pretreatment techniques. The study was carried out with 102 soil samples from two distinct fields. Predictive models were developed for nine chemical and physical soil attributes, using lasso, PCR and MLR. Both Vis-NIR and XRF raw spectral data showed strong performance for soil attribute prediction when modelled with PCR and the lasso method. In general, similar results were found when comparing the root mean squared error (RMSE) and coefficient of determination (R2) from the literature that applied pretreatment techniques and this study. For example, considering base saturation (V%), for Vis-NIR combined with PCR, in this study, RMSE and R2 values of 10.60 and 0.79 were found, compared with 10.38 and 0.80, respectively, in the literature.
In addition, looking at potassium (K), XRF associated with lasso yielded an RMSE value of 0.60 and R2 of 0.92, and in the literature, RMSE and R2 of 0.53 and 0.95, respectively, were found. The major discrepancy was observed for phosphorus (P) and organic matter (OM) prediction applying PCR to the XRF data, which showed R2 of 0.33 (for P) and 0.52 (for OM) without using pretreatment techniques in this study, and R2 of 0.01 (for P) and 0.74 (for OM) when using preprocessing techniques in the literature. These results indicate that data pretreatment can be dispensable for predicting some soil attributes when using Vis-NIR and XRF raw data modeled with dimensionality reduction statistical models. Despite this, there is no consensus on the best way to calibrate data, as this seems to be attribute and area specific. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
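Of the two dimensionality reduction models compared, the lasso has a particularly compact form: coordinate descent with a soft-thresholding operator that zeroes out uninformative spectral bands. A sketch on an invented four-sample dataset:

```python
def soft_threshold(rho, lam):
    """Lasso's shrinkage operator: pulls small coefficients exactly to zero."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, iters=50):
    """Coordinate-descent lasso on raw (unstandardized) columns."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # residual with feature j held out
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# column 0 drives y (y = 3 * x0); column 1 is irrelevant and gets zeroed out
X = [[1, 1], [2, -1], [3, 1], [4, -1]]
y = [3, 6, 9, 12]
beta = lasso_cd(X, y, lam=1.0)
print(beta)  # beta[1] is shrunk exactly to 0
```

That exact-zero behavior is what makes the lasso a dimensionality reduction method rather than a mere shrinkage method.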
13 pages, 356 KiB  
Article
Model Soups for Various Training and Validation Data
by Kaiyu Suzuki and Tomofumi Matsuzawa
AI 2022, 3(4), 796-808; https://doi.org/10.3390/ai3040048 - 28 Sep 2022
Cited by 1 | Viewed by 2530
Abstract
Model soups synthesize multiple models after fine-tuning them with different hyperparameters based on the accuracy of the validation data. They train different models on the same training and validation data sets. In this study, we maximized the fine-tuned model’s accuracy while retaining the inference time and memory cost of a single model. We extended model soups by creating k subsets of the training and validation data, using a method similar to k-fold cross-validation, and trained models on these subsets. First, we showed the correlation between the validation and test data when models are synthesized such that their training data contain validation data. Thereafter, we showed that synthesizing k of these models, after synthesizing models based on subsets of the same training and validation data, provides a single model with high test accuracy. This study provides a method for learning models with both high accuracy and reliability for small datasets such as medical images. Full article
(This article belongs to the Topic Recent Trends in Image Processing and Pattern Recognition)
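The underlying "soup" operation is simply a parameter average across fine-tuning runs. A toy sketch with list-valued parameters (real soups average the weight tensors of identically structured networks):

```python
def uniform_soup(state_dicts):
    """Average the parameters of several fine-tuned models into one 'soup'."""
    n = len(state_dicts)
    return {k: [sum(sd[k][i] for sd in state_dicts) / n
                for i in range(len(state_dicts[0][k]))]
            for k in state_dicts[0]}

# three fine-tuning runs of the same two-parameter "model"
runs = [{"w": [1.0, 2.0]}, {"w": [3.0, 2.0]}, {"w": [2.0, 5.0]}]
soup = uniform_soup(runs)
print(soup)  # -> {'w': [2.0, 3.0]}
```

Because the averaged result is a single set of weights, inference costs the same as one model, which is the property the study builds on when souping models trained on k data subsets.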
7 pages, 3594 KiB  
Brief Report
A Pilot Study on the Use of Generative Adversarial Networks for Data Augmentation of Time Series
by Nicolas Morizet, Matteo Rizzato, David Grimbert and George Luta
AI 2022, 3(4), 789-795; https://doi.org/10.3390/ai3040047 - 26 Sep 2022
Viewed by 2039
Abstract
Data augmentation is needed to use Deep Learning methods for the typically small time series datasets. There is limited literature on the evaluation of the performance of the use of Generative Adversarial Networks for time series data augmentation. We describe and discuss the results of a pilot study that extends a recent evaluation study of two families of data augmentation methods for time series (i.e., transformation-based methods and pattern-mixing methods), and provide recommendations for future work in this important area of research. Full article
(This article belongs to the Special Issue Feature Papers for AI)
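Of the two families evaluated, the transformation-based methods are simple to illustrate: operations such as jittering (additive noise) and magnitude scaling generate new series from an existing one. The parameter values below are illustrative, not from the study:

```python
import random

def jitter(series, sigma=0.05, seed=4):
    """Additive Gaussian noise: a classic transformation-based augmentation."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in series]

def magnitude_scale(series, factor):
    """Uniform magnitude scaling, another simple transformation."""
    return [x * factor for x in series]

ts = [0.0, 0.5, 1.0, 0.5, 0.0]
print(jitter(ts))                   # same shape, slightly perturbed values
print(magnitude_scale(ts, 2.0))    # -> [0.0, 1.0, 2.0, 1.0, 0.0]
```

Pattern-mixing methods, the other family studied, instead combine two or more real series into a new one.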
11 pages, 535 KiB  
Communication
AI and We in the Future in the Light of the Ouroboros Model: A Plea for Plurality
by Knud Thomsen
AI 2022, 3(4), 778-788; https://doi.org/10.3390/ai3040046 - 22 Sep 2022
Cited by 1 | Viewed by 2027
Abstract
Artificial Intelligence (AI) is set to play an ever more important role in our lives and societies. Here, some boundary conditions and possibilities for shaping and using AI as well as advantageously embedding it in daily life are sketched. On the basis of a recently proposed cognitive architecture that claims to deliver a general layout for both natural intelligence and general AI, a coarse but broad perspective is developed and an emphasis is put on AI ethics. A number of findings, requirements, and recommendations are derived that can transparently be traced to the hypothesized structure and the procedural operation of efficient cognitive agents according to the Ouroboros Model. Including all of the available and possibly relevant information for any action and respecting a “negative imperative” are the most important resulting recommendations. Self-consistency, continual monitoring, equitable considerations, accountability, flexibility, and pragmatic adaptations are highlighted as foundations and, at the same time, mandatory consequences for timely answers to the most relevant questions concerning the embedding of AI in society and ethical rules for this. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)