Computational Linguistics and Artificial Intelligence

A special issue of Sci (ISSN 2413-4155). This special issue belongs to the section "Computer Sciences, Mathematics and AI".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 10307

Special Issue Editors


Guest Editor
Engineering School (DEIM), University of Tuscia, Largo dell'Università, 01100 Viterbo, Italy
Interests: wavelets; fractals; fractional and stochastic equations; numerical and computational methods; mathematical physics; nonlinear systems; artificial intelligence

Guest Editor
Department of Computing and Mathematics, University of São Paulo, Ribeirão Preto 14040-901, Brazil
Interests: computational linguistics; artificial intelligence; interpretable machine learning; natural language processing

Guest Editor
DISUCOM, University of Tuscia, via S. Maria in Gradi 4, 01100 Viterbo, Italy
Interests: symbolic artificial intelligence; knowledge representation; computational aspects of automated reasoning; semantic web; explainable AI

Special Issue Information

Dear Colleagues,

The field of Computational Linguistics (CL), through its engineering domain of natural language processing (NLP), has an important role to play in Artificial Intelligence (AI). AI, in turn, depends on what information about the structure of human language is transmitted to the machine. Built on this relationship, which ranges from small prototypes and theoretical models to robust processing and learning systems applied to large corpora, is the exploration (alternative or joint) of the logical relationships that generate meaning and of context-sensitive relationships. This is the key to the learning principles that clarify the interaction between CL and AI and that can endow computers with intelligence.

The human linguistic process rests simultaneously on axiomatic (contextual) and logical resources. This universal structure of language can elucidate the fundamental limitations of learning algorithms and help practitioners design algorithms that circumvent those limitations. The ease with which intelligent systems assimilate logical tasks, and their difficulty in learning to represent context, show that a Computational Linguistics that overcomes architectural limitations must manage this double linguistic structure, as a process that is at once axiomatic (contextual) and logical. This perspective makes it possible, for example when determining the meaning of sentences, to transmit to the machine methods from CL that cover both logical and contextual aspects. Such an ability meets the definition of Artificial Intelligence, making systems more powerful and improving their efficiency in corpus annotation, the evaluation of NLP systems, procedures in computational linguistics and NLP, the engineering tasks to which those procedures are applied, and approaches to machine learning, among others.
It is no accident that an overlap of domains stems from this common structure, justifying the permeability between computer science, linguistics, psychology, philosophy, and mathematics, among other branches of science. This wide and dynamic spectrum of scientific fields makes this Special Issue a reference containing important results for various classes and tasks of the linguistic process, at the intersection between human and machine. We therefore expect graduate students and researchers in their respective areas to propose solutions that overcome the complexity of the language transmitted to Artificial Intelligence systems and make those systems more intuitive.

Prof. Dr. Carlo Cattani
Dr. Dioneia Motta Monte-Serrat
Prof. Dr. Francesco M. Donini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sci is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computational linguistics
  • artificial intelligence
  • mathematics
  • linguistics
  • natural language processing
  • learning systems
  • corpora annotation
  • machine learning
  • language
  • discourse analysis
  • parsing

Published Papers (4 papers)


Research

17 pages, 2837 KiB  
Article
T5 for Hate Speech, Augmented Data, and Ensemble
by Tosin Adewumi, Sana Sabah Sabry, Nosheen Abid, Foteini Liwicki and Marcus Liwicki
Sci 2023, 5(4), 37; https://doi.org/10.3390/sci5040037 - 22 Sep 2023
Cited by 1 | Viewed by 1562
Abstract
We conduct relatively extensive investigations of automatic hate speech (HS) detection using different State-of-The-Art (SoTA) baselines across 11 subtasks spanning six different datasets. Our motivation is to determine which of the recent SoTA models is best for automatic hate speech detection and what advantage methods, such as data augmentation and ensemble, may have on the best model, if any. We carry out six cross-task investigations. We achieve new SoTA results on two subtasks—macro F1 scores of 91.73% and 53.21% for subtasks A and B of the HASOC 2020 dataset, surpassing previous SoTA scores of 51.52% and 26.52%, respectively. We achieve near-SoTA results on two others—macro F1 scores of 81.66% for subtask A of the OLID 2019 and 82.54% for subtask A of the HASOC 2021, in comparison to SoTA results of 82.9% and 83.05%, respectively. We perform error analysis and use two eXplainable Artificial Intelligence (XAI) algorithms (Integrated Gradient (IG) and SHapley Additive exPlanations (SHAP)) to reveal how two of the models (Bi-Directional Long Short-Term Memory Network (Bi-LSTM) and Text-to-Text-Transfer Transformer (T5)) make the predictions they do by using examples. Other contributions of this work are: (1) the introduction of a simple, novel mechanism for correcting Out-of-Class (OoC) predictions in T5, (2) a detailed description of the data augmentation methods, and (3) the revelation of the poor data annotations in the HASOC 2021 dataset by using several examples and XAI (buttressing the need for better quality control). We publicly release our model checkpoints and codes to foster transparency.
(This article belongs to the Special Issue Computational Linguistics and Artificial Intelligence)
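The macro F1 metric reported in the abstract above averages per-class F1 scores with equal weight per class, regardless of class frequency, which is why it is preferred for imbalanced hate-speech datasets. As a reader's aid, a minimal sketch in plain Python (not the authors' released code):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: mean of per-class F1 scores, each class weighted equally."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # Harmonic mean of precision and recall (0 when both are 0)
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

For example, with y_true = [0, 0, 1, 1] and y_pred = [0, 1, 1, 1], the per-class F1 scores are 2/3 and 4/5, giving a macro F1 of 11/15 ≈ 0.733.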

27 pages, 3650 KiB  
Article
Multi-Lexicon Classification and Valence-Based Sentiment Analysis as Features for Deep Neural Stock Price Prediction
by Shubashini Rathina Velu, Vinayakumar Ravi and Kayalvily Tabianan
Sci 2023, 5(1), 8; https://doi.org/10.3390/sci5010008 - 15 Feb 2023
Cited by 5 | Viewed by 2425
Abstract
The goal of the work is to enhance existing financial market forecasting frameworks by including an additional factor, in this example a collection of carefully chosen tweets, into a long-short repetitive neural channel. In order to produce attributes for such a forecast, this research used a unique attitude analysis approach that combined psychological labelling and a valence rating that represented the strength of the sentiment. Both lexicons produced extra properties such as 2-level polarization, 3-level polarization, gross reactivity, as well as total valence. The emotional polarity explicitly marked into the database contrasted well with outcomes of the innovative lexicon approach. Plotting the outcomes of each of these concepts against actual market rates of the equities examined has been the concluding step in this analysis. Root Mean Square Error (RMSE), preciseness, as well as Mean Absolute Percentage Error (MAPE) were used to evaluate the results. Across most instances of market forecasting, attaching an additional factor has been proven to reduce the RMSE and increase the precision of forecasts over lengthy sequences.
(This article belongs to the Special Issue Computational Linguistics and Artificial Intelligence)
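RMSE and MAPE, the evaluation metrics named in the abstract above, are standard regression error measures for price forecasts. A brief sketch of their definitions (hypothetical helper names, not the paper's code):

```python
import math

def rmse(actual, predicted):
    # Root Mean Square Error: square root of the mean squared deviation,
    # so large errors are penalised quadratically.
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def mape(actual, predicted):
    # Mean Absolute Percentage Error: scale-free, but undefined where actual == 0.
    n = len(actual)
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
```

Lower values of both indicate better forecasts; MAPE is expressed as a percentage, which makes it comparable across equities with different price levels.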

20 pages, 1547 KiB  
Article
The Language Conceptual Formation to Inspire Intelligent Systems
by Dioneia Monte-Serrat and Carlo Cattani
Sci 2022, 4(4), 42; https://doi.org/10.3390/sci4040042 - 08 Nov 2022
Viewed by 1461
Abstract
The semantic web invests in systems that work collaboratively. In this article we show that the collaborative way is not enough, because the system must ‘understand’ the data resources that are provided to it, to organize them in the direction indicated by the system’s core, the algorithm. In order for intelligent systems to imitate human cognition, in addition to technical skills to model algorithms, we show that the specialist needs a good knowledge of the principles that explain how human language constructs concepts. The content of this article focuses on the principles of the conceptual formation of language, pointing to aspects related to the environment, to logical reasoning and to the recursive process. We used the strategy of superimposing the dynamics of human cognition and intelligent systems to open new frontiers regarding the formation of concepts by human cognition. The dynamic aspect of the recursion of the human linguistic process integrates visual, auditory, tactile input stimuli, among others, to the central nervous system, where meaning is constructed. We conclude that the human linguistic process involves axiomatic (contextual/biological) and logical principles, and that the dynamics of the relationship between them takes place through recursive structures, which guarantee the construction of meanings through long-range correlation under scale invariance. Recursion and cognition are, therefore, interdependent elements of the linguistic process, making it a set of sui generis structures that evidence that the essence of language, whether natural or artificial, is a form and not a substance.
(This article belongs to the Special Issue Computational Linguistics and Artificial Intelligence)

Review

26 pages, 421 KiB  
Review
From Turing to Transformers: A Comprehensive Review and Tutorial on the Evolution and Applications of Generative Transformer Models
by Emma Yann Zhang, Adrian David Cheok, Zhigeng Pan, Jun Cai and Ying Yan
Sci 2023, 5(4), 46; https://doi.org/10.3390/sci5040046 - 15 Dec 2023
Viewed by 2636
Abstract
In recent years, generative transformers have become increasingly prevalent in the field of artificial intelligence, especially within the scope of natural language processing. This paper provides a comprehensive overview of these models, beginning with the foundational theories introduced by Alan Turing and extending to contemporary generative transformer architectures. The manuscript serves as a review, historical account, and tutorial, aiming to offer a thorough understanding of the models’ importance, underlying principles, and wide-ranging applications. The tutorial section includes a practical guide for constructing a basic generative transformer model. Additionally, the paper addresses the challenges, ethical implications, and future directions in the study of generative models.
(This article belongs to the Special Issue Computational Linguistics and Artificial Intelligence)
