Frontiers in Artificial Intelligence

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 32990

Special Issue Editor

Prof. Dr. Kenji Suzuki
Biomedical Artificial Intelligence Research Unit (BMAI), Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan
Interests: machine learning; deep learning; artificial intelligence; medical image analysis; medical imaging; computer-aided diagnosis; signal and image processing; computer vision

Special Issue Information

Dear Colleagues,

As a branch of computer science, artificial intelligence (AI) seeks to understand the essence of intelligence and to produce new kinds of intelligent machines that can respond in ways similar to human intelligence. Owing to recent breakthroughs in deep learning, AI has become one of the most active topics in virtually all areas of research. Expectations for AI are very high, and it is said that AI could spur the Fourth Industrial Revolution. We encourage researchers to take part in this historic era of AI, a rapidly growing, promising field that is coming into its prime.

This Special Issue, led by the Editor-in-Chief of AI, is open for submissions of feature papers and aims to provide a platform for innovative, frontier research involving artificial intelligence, including AI algorithms, software, fundamentals, theory, hardware, and applications. Topics of interest include, but are not limited to, the following:

  • Learning: machine learning, deep learning, data learning, reinforcement learning, federated learning, explainable machine learning
  • Reasoning: automated reasoning, knowledge representation, knowledge reasoning, fuzzy logic, expert systems, data mining, knowledge discovery
  • AI models: artificial neural networks, convolutional neural networks, decision trees, support vector machines, kernel machines, residual learning, generative adversarial networks, fuzzy models
  • Vision: computer vision, pattern recognition, machine perception, face recognition, fingerprint recognition, automated surveillance
  • Planning: multi-agent planning, multi-agent systems, automated planning, automated scheduling
  • Robotics: intelligent robots, human-machine interfaces, mechatronics, biomimetics, humanoid robots, brain-computer interfaces, smart controls
  • Language: natural language processing, text mining, question answering, machine translation, voice recognition, speech recognition, information retrieval
  • Hardware: AI chips, graphics processing units (GPUs), hardware architecture, hardware design, FPGAs, ASICs, quantum computing
  • Applications: healthcare, medicine, automotive, industry, transportation, drugs, finance, cybersecurity, advertising, games, science, art

This Special Issue will include high-quality papers reporting cutting-edge studies, with the aim of disseminating scientific knowledge and impactful discoveries in the field of artificial intelligence around the world. We welcome your submissions.

Prof. Dr. Kenji Suzuki
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine and deep learning
  • knowledge reasoning and discovery
  • automated planning and scheduling
  • natural language processing and recognition
  • computer vision
  • robotics

Published Papers (5 papers)


Research


29 pages, 62259 KiB  
Article
A Combination of Multilayer Perceptron, Radial Basis Function Artificial Neural Networks and Machine Learning Image Segmentation for the Dimension Reduction and the Prognosis Assessment of Diffuse Large B-Cell Lymphoma
by Joaquim Carreras, Yara Yukie Kikuti, Masashi Miyaoka, Shinichiro Hiraiwa, Sakura Tomita, Haruka Ikoma, Yusuke Kondo, Atsushi Ito, Naoya Nakamura and Rifat Hamoudi
AI 2021, 2(1), 106-134; https://doi.org/10.3390/ai2010008 - 08 Mar 2021
Cited by 24 | Viewed by 5329
Abstract
The prognosis of diffuse large B-cell lymphoma (DLBCL) is heterogeneous. Therefore, we aimed to highlight predictive biomarkers. First, artificial intelligence was applied to a discovery series of gene expression data from 414 patients (GSE10846). A dimension-reduction algorithm, aimed at correlating gene expression with overall survival and other clinicopathological variables, combined Multilayer Perceptron (MLP) and Radial Basis Function (RBF) artificial neural networks, gene-set enrichment analysis (GSEA), Cox regression, and other machine learning and predictive analytics models [C5.0 algorithm, logistic regression, Bayesian network, discriminant analysis, random trees, tree-AS, Chi-squared Automatic Interaction Detection (CHAID) tree, QUEST, classification and regression (C&R) tree and neural net]. From an initial 54,613 gene probes, a set of 488 genes and a final set of 16 genes were defined. Secondly, two identified immune-checkpoint markers, PD-L1 (CD274) and IKAROS (IKZF4), were validated in an independent series from Tokai University, and their immunohistochemical expression was quantified using machine-learning-based Weka segmentation. High PD-L1 was associated with poor overall and progression-free survival, a non-GCB phenotype, Epstein–Barr virus infection (EBER+), high RGS1 expression, and several clinicopathological variables such as high IPI and absence of clinical response. Conversely, high expression of IKAROS was associated with good overall and progression-free survival, a GCB phenotype, and a positive clinical response to treatment. Finally, the set of 16 genes (PAF1, USP28, SORT1, MAP7D3, FITM2, CENPO, PRCC, ALDH6A1, CSNK2A1, TOR1AIP1, NUP98, UBE2H, UBXN7, SLC44A2, NR2C2AP and LETM1), in combination with PD-L1, IKAROS, BCL2, MYC, CD163 and TNFAIP8, predicted the survival outcome of DLBCL with an overall accuracy of 82.1%. In conclusion, building predictive models of DLBCL is a feasible analytical strategy.
(This article belongs to the Special Issue Frontiers in Artificial Intelligence)
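
A rough, hypothetical sketch of the kind of neural-network-driven gene ranking described in this abstract (not the authors' actual pipeline): a small multilayer perceptron is trained on synthetic expression data and probes are ranked by permutation importance. All sizes, data and the 16-probe cutoff are placeholders.

    # Hypothetical sketch: rank gene probes by how much shuffling each one
    # degrades an MLP classifier's accuracy, a crude stand-in for the
    # MLP/RBF-based dimension reduction described above.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_patients, n_probes = 414, 200               # placeholder sizes, not GSE10846
    X = rng.normal(size=(n_patients, n_probes))   # synthetic expression matrix
    y = rng.integers(0, 2, size=n_patients)       # synthetic outcome label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    mlp.fit(X_tr, y_tr)

    # Probes whose permutation hurts accuracy most would be kept in a reduced set.
    imp = permutation_importance(mlp, X_te, y_te, n_repeats=10, random_state=0)
    top_probes = np.argsort(imp.importances_mean)[::-1][:16]
    print("Top-ranked probe indices:", top_probes)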

23 pages, 28729 KiB  
Article
A Biologically Motivated, Proto-Object-Based Audiovisual Saliency Model
by Sudarshan Ramenahalli
AI 2020, 1(4), 487-509; https://doi.org/10.3390/ai1040030 - 03 Nov 2020
Cited by 3 | Viewed by 3397
Abstract
The natural environment and our interaction with it are essentially multisensory: we may deploy visual, tactile and/or auditory senses to perceive, learn from and interact with our environment. Our objective in this study is to develop a scene analysis algorithm using multisensory information, specifically vision and audio. We develop a proto-object-based audiovisual saliency map (AVSM) for the analysis of dynamic natural scenes. A specialized audiovisual camera with a 360° field of view, capable of locating sound direction, is used to collect spatiotemporally aligned audiovisual data. We demonstrate that the performance of the proto-object-based audiovisual saliency map in detecting and localizing salient objects/events agrees with human judgment. In addition, the proto-object-based AVSM, computed as a linear combination of visual and auditory feature conspicuity maps, captures a higher number of valid salient events than unisensory saliency maps. Such an algorithm can be useful in surveillance, robotic navigation, video compression and related applications.
(This article belongs to the Special Issue Frontiers in Artificial Intelligence)
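
The audiovisual saliency map above is computed as a linear combination of visual and auditory feature conspicuity maps. The toy sketch below illustrates that combination on synthetic maps with assumed weights; it is not the author's implementation.

    # Toy AVSM sketch: normalize a visual and an auditory conspicuity map and
    # combine them linearly; the maps, weights and grid size are made up.
    import numpy as np

    def normalize(m):
        m = m - m.min()
        return m / m.max() if m.max() > 0 else m

    h, w = 64, 360                               # e.g., elevation x azimuth bins
    visual = normalize(np.random.rand(h, w))     # stand-in visual conspicuity map
    auditory = normalize(np.random.rand(h, w))   # stand-in auditory conspicuity map

    w_v, w_a = 0.6, 0.4                          # assumed combination weights
    avsm = w_v * visual + w_a * auditory

    y, x = np.unravel_index(np.argmax(avsm), avsm.shape)
    print(f"Most salient location: azimuth bin {x}, elevation bin {y}")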

23 pages, 26735 KiB  
Article
Comparing U-Net Based Models for Denoising Color Images
by Rina Komatsu and Tad Gonsalves
AI 2020, 1(4), 465-486; https://doi.org/10.3390/ai1040029 - 12 Oct 2020
Cited by 14 | Viewed by 10900
Abstract
Digital images often become corrupted by undesirable noise during acquisition, compression, storage, and transmission. Although the kinds of digital noise are varied, current denoising studies focus on removing only a single, specific kind of noise with a dedicated deep-learning model. Lack of generalization is a major limitation of these models: they cannot be extended to filter image noise other than that for which they were designed. This study deals with the design and training of a generalized deep learning denoising model that can remove five different kinds of noise from any digital image: Gaussian noise, salt-and-pepper noise, clipped whites, clipped blacks, and camera shake. The denoising model is built on the standard segmentation U-Net architecture and has three variants: U-Net with Group Normalization, Residual U-Net, and Dense U-Net. The combination of adversarial and L1-norm loss functions produces sharply denoised images and shows a performance improvement over the standard U-Net, the Denoising Convolutional Neural Network (DnCNN), and the Wide Inference Network (WIN5RB) denoising models.
(This article belongs to the Special Issue Frontiers in Artificial Intelligence)
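
The denoising models above are trained with a combination of adversarial and L1 losses. The fragment below is a minimal, hypothetical PyTorch version of such a generator-side objective; the weighting factor and discriminator interface are assumptions, not the paper's code.

    # Minimal sketch of a generator loss combining an L1 reconstruction term with
    # an adversarial term; lambda_l1 and the discriminator interface are assumed.
    import torch
    import torch.nn as nn

    l1_loss = nn.L1Loss()
    adv_loss = nn.BCEWithLogitsLoss()

    def generator_loss(denoised, clean, disc_logits, lambda_l1=100.0):
        # The generator wants the discriminator to label its outputs as real (1).
        real_labels = torch.ones_like(disc_logits)
        adversarial = adv_loss(disc_logits, real_labels)
        reconstruction = l1_loss(denoised, clean)
        return adversarial + lambda_l1 * reconstruction

    # Example with dummy tensors (batch of 4 RGB 64x64 images).
    denoised = torch.rand(4, 3, 64, 64)
    clean = torch.rand(4, 3, 64, 64)
    disc_logits = torch.randn(4, 1)              # discriminator output for the denoised batch
    print(generator_loss(denoised, clean, disc_logits).item())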

19 pages, 2102 KiB  
Article
A Single Gene Expression Set Derived from Artificial Intelligence Predicted the Prognosis of Several Lymphoma Subtypes; and High Immunohistochemical Expression of TNFAIP8 Associated with Poor Prognosis in Diffuse Large B-Cell Lymphoma
by Joaquim Carreras, Yara Y. Kikuti, Masashi Miyaoka, Shinichiro Hiraiwa, Sakura Tomita, Haruka Ikoma, Yusuke Kondo, Atsushi Ito, Sawako Shiraiwa, Rifat Hamoudi, Kiyoshi Ando and Naoya Nakamura
AI 2020, 1(3), 342-360; https://doi.org/10.3390/ai1030023 - 21 Jul 2020
Cited by 15 | Viewed by 4075
Abstract
Objective: Using multilayer perceptron analysis (artificial intelligence), we recently identified a set of 25 genes with prognostic relevance in diffuse large B-cell lymphoma (DLBCL), but the importance of this set in other hematological neoplasias remains unknown. Methods and Results: We tested this set of genes (ALDOB, ARHGAP19, ARMH3, ATF6B, CACNA1B, DIP2A, EMC9, ENO3, GGA3, KIF23, LPXN, MESD, METTL21A, POLR3H, RAB7A, RPS23, SERPINB8, SFTPC, SNN, SPACA9, SWSAP1, SZRD1, TNFAIP8, WDCP and ZSCAN12) in a large gene expression series of 2029 cases selected from available databases, which included chronic lymphocytic leukemia (CLL, n = 308), mantle cell lymphoma (MCL, n = 92), follicular lymphoma (FL, n = 180), DLBCL (n = 741), multiple myeloma (MM, n = 559) and acute myeloid leukemia (AML, n = 149). Using a risk-score formula, we could predict the overall survival of the patients: the hazard ratio of the high-risk versus low-risk groups was 3.2 for all cases and, per disease subtype, was as follows: CLL (4.3), MCL (5.2), FL (3.0), DLBCL not otherwise specified (NOS) (4.5), MM (5.3) and AML (3.7) (all p values < 0.000001). All 25 genes contributed to the risk score, but their weights and the direction of their correlation varied. Among them, the most relevant were ENO3, TNFAIP8, ATF6B, METTL21A, KIF23 and ARHGAP19. Next, we validated TNFAIP8 (a negative mediator of apoptosis) in an independent series of 97 cases of DLBCL NOS from Tokai University Hospital. The immunohistochemical protein expression of TNFAIP8 was quantified using an artificial-intelligence-based segmentation method and confirmed with conventional RGB-based digital quantification. We confirmed that high protein expression of TNFAIP8 by the neoplastic B-lymphocytes was associated with poor overall survival (hazard ratio 3.5; p = 0.018), as well as with other relevant clinicopathological variables, including age > 60 years, high serum levels of soluble IL2RA, a non-GCB phenotype (cell-of-origin Hans classifier), moderately higher MYC and Ki67 (proliferation index), and high infiltration of the immune microenvironment by CD163-positive tumor-associated macrophages (CD163+ TAMs). Conclusion: It is possible to predict the prognosis of several hematological neoplasias using a single gene set derived from neural network analysis. High expression of TNFAIP8 is associated with a poor prognosis in DLBCL.
(This article belongs to the Special Issue Frontiers in Artificial Intelligence)
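
The risk-score stratification described above can be illustrated with a short, purely hypothetical sketch: a weighted sum of per-gene expression values (with made-up weights standing in for the fitted coefficients) is split at the median into high- and low-risk groups.

    # Hypothetical gene-expression risk score: a weighted sum of expression values,
    # split at the median into high-/low-risk groups. Values and weights are
    # synthetic; this is not the published model.
    import numpy as np

    genes = ["ENO3", "TNFAIP8", "ATF6B", "METTL21A", "KIF23", "ARHGAP19"]
    rng = np.random.default_rng(1)

    n_patients = 100
    expression = rng.normal(size=(n_patients, len(genes)))   # rows: patients
    weights = rng.normal(size=len(genes))                    # stand-in coefficients

    risk_score = expression @ weights
    high_risk = risk_score > np.median(risk_score)
    print(f"{high_risk.sum()} high-risk vs {(~high_risk).sum()} low-risk patients")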

Other


17 pages, 857 KiB  
Opinion
The Ouroboros Model, Proposal for Self-Organizing General Cognition Substantiated
by Knud Thomsen
AI 2021, 2(1), 89-105; https://doi.org/10.3390/ai2010007 - 26 Feb 2021
Cited by 2 | Viewed by 4179
Abstract
The Ouroboros Model has been proposed as a biologically inspired, comprehensive cognitive architecture for general intelligence, encompassing both natural and artificial manifestations. The approach addresses very diverse fundamental desiderata of research in natural cognition and in artificial intelligence (AI). Here, it is described how the postulated structures have met with supportive evidence over recent years. The associated hypothesized processes could remedy pressing problems that plague many, even the most powerful, current implementations of AI, in particular deep neural networks. Selected recent findings from very different fields are brought together to illustrate the current status and substantiate the proposal.
(This article belongs to the Special Issue Frontiers in Artificial Intelligence)
