Advancements in Natural Language Processing, Semantic Networks, and Sentiment Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 July 2024 | Viewed by 1552

Special Issue Editors


Guest Editor
Information Technologies Group, atlanTTic, University of Vigo, 36310 Vigo, Spain
Interests: artificial intelligence; computational linguistics; machine learning; natural language processing

Guest Editor
Department of Telematics Engineering, University of Vigo, 36310 Vigo, Spain
Interests: artificial intelligence; natural language processing; P2P networks; recommender systems; personal devices and mobile services

Guest Editor
Information Technologies Group, atlanTTic, University of Vigo, 36310 Vigo, Spain
Interests: artificial intelligence; natural language processing; computing systems design; real-time systems; machine learning

Special Issue Information

Dear Colleagues,

The recent advancements in deep learning models and the availability of multi-modal data online have motivated the development of new natural language processing techniques; pre-trained language models and large language models are representative examples. Accordingly, this Special Issue on "Advancements in Natural Language Processing, Semantic Networks, and Sentiment Analysis" welcomes contributions on these advanced techniques, with particular attention to the management of semantic knowledge (e.g., sentiment analysis and emotion detection applications) in multidisciplinary use cases of artificial intelligence (e.g., smart health services). It provides an opportunity to advance the generative artificial intelligence literature for academia, industry, and the general public. The call is therefore open to theoretical and practical research contributions that inspire innovation in this field. Recommended topics include, but are not limited to, the following:

  • advanced sentiment analysis and emotion detection techniques;
  • applications of generative artificial intelligence (e.g., pre-trained language models and large language models);
  • machine learning models in batch and streaming operations;
  • the study of semantic knowledge management and representation (e.g., semantic networks).
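As a minimal illustration of one of these topics, the sketch below applies an off-the-shelf pre-trained language model to sentiment analysis using the Hugging Face transformers library; the checkpoint and the example texts are illustrative assumptions, not part of the call itself.

```python
# Minimal sentiment analysis with a pre-trained language model
# (illustrative only; any comparable checkpoint would serve).
from transformers import pipeline

# Publicly available sentiment classifier fine-tuned on SST-2.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Example texts in the spirit of a smart health use case (invented here).
reviews = [
    "The new clinic portal made booking an appointment effortless.",
    "Waiting times were unacceptable and the staff seemed overwhelmed.",
]

# Each prediction carries a label (POSITIVE/NEGATIVE) and a confidence score.
for review, pred in zip(reviews, classifier(reviews)):
    print(f"{pred['label']:<8} {pred['score']:.3f}  {review}")
```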

Dr. Silvia García-Méndez
Dr. Enrique Costa-Montenegro
Dr. Francisco De Arriba-Pérez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • emotion detection
  • large language models
  • machine learning
  • natural language processing
  • pre-trained language models
  • semantics and pragmatics
  • sentiment analysis

Published Papers (3 papers)


Research

18 pages, 996 KiB  
Article
REACT: Relation Extraction Method Based on Entity Attention Network and Cascade Binary Tagging Framework
by Lingqi Kong and Shengquan Liu
Appl. Sci. 2024, 14(7), 2981; https://doi.org/10.3390/app14072981 - 02 Apr 2024
Viewed by 366
Abstract
With the development of the Internet, vast amounts of text information are being generated constantly. Methods for extracting the valuable parts from this information have become an important research field. Relation extraction aims to identify entities and the relations between them from text, helping computers better understand textual information. Currently, the field of relation extraction faces various challenges, particularly in addressing the relation overlapping problem. The main difficulties are as follows: (1) Traditional relation extraction methods have limitations and lack the ability to handle the relation overlapping problem, requiring a redesign. (2) Relation extraction models are easily disturbed by noise from words with weak relevance to the relation extraction task, leading to difficulties in correctly identifying entities and their relations. In this paper, we propose REACT, a Relation extraction method based on an Entity Attention network and a Cascade binary Tagging framework. We decompose the relation extraction task into two subtasks: head entity identification, and tail entity and relation identification. REACT first identifies the head entity and then identifies all possible tail entities that can be paired with it, as well as all possible relations. With this architecture, the model can handle the relation overlapping problem. To reduce the interference of words that are unrelated to the head entity or the relation extraction task, and to improve the accuracy of identifying tail entities and relations, we design an entity attention network. To demonstrate the effectiveness of REACT, we construct a high-quality Chinese dataset and conduct a large number of experiments on it. The experimental results fully confirm the effectiveness of REACT, showing its significant advantages in handling the relation overlapping problem compared to other current methods.
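For orientation, here is a minimal sketch of the cascade binary tagging idea the abstract describes: a first pair of taggers marks head entity spans, and relation-specific taggers then mark tail entity spans conditioned on a chosen head, with a simple head-conditioned attention standing in for the paper's entity attention network. All names, layer sizes, and the attention form are illustrative assumptions, not the authors' implementation.

```python
# Sketch of cascade binary tagging for relation extraction (illustrative only).
import torch
import torch.nn as nn

class CascadeTagger(nn.Module):
    def __init__(self, hidden_size: int, num_relations: int):
        super().__init__()
        # Subtask 1: binary start/end taggers for head entities.
        self.head_start = nn.Linear(hidden_size, 1)
        self.head_end = nn.Linear(hidden_size, 1)
        # Subtask 2: per-relation start/end taggers for tail entities.
        self.tail_start = nn.Linear(hidden_size, num_relations)
        self.tail_end = nn.Linear(hidden_size, num_relations)

    def forward(self, token_repr: torch.Tensor, head_span: tuple):
        # token_repr: (batch, seq_len, hidden) from any pre-trained encoder.
        p_head_start = torch.sigmoid(self.head_start(token_repr)).squeeze(-1)
        p_head_end = torch.sigmoid(self.head_end(token_repr)).squeeze(-1)

        # Pool the representation of one candidate head entity span.
        s, e = head_span
        head_vec = token_repr[:, s:e + 1].mean(dim=1, keepdim=True)   # (B, 1, H)

        # Head-conditioned attention over the sentence: tokens relevant to this
        # head are up-weighted, reducing noise from unrelated words (a stand-in
        # for the entity attention network described in the abstract).
        scores = torch.matmul(token_repr, head_vec.transpose(1, 2))
        weights = torch.softmax(scores / token_repr.size(-1) ** 0.5, dim=1)
        head_aware = token_repr * weights + head_vec                   # (B, L, H)

        # Each position scores, per relation, whether a tail entity starts/ends.
        p_tail_start = torch.sigmoid(self.tail_start(head_aware))
        p_tail_end = torch.sigmoid(self.tail_end(head_aware))
        return p_head_start, p_head_end, p_tail_start, p_tail_end
```

Because every head entity gets its own relation-specific tail taggers, a single sentence can yield several triples that share entities, which is how this family of frameworks copes with overlapping relations.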

15 pages, 557 KiB  
Article
Prefix Data Augmentation for Contrastive Learning of Unsupervised Sentence Embedding
by Chunchun Wang and Shu Lv
Appl. Sci. 2024, 14(7), 2880; https://doi.org/10.3390/app14072880 - 29 Mar 2024
Viewed by 396
Abstract
This paper presents prefix data augmentation (Prd) as an innovative method for enhancing sentence embedding learning through unsupervised contrastive learning. The framework, dubbed PrdSimCSE, uses Prd to create both positive and negative sample pairs. By appending positive and negative prefixes to a sentence, the basis for contrastive learning is formed, outperforming the baseline unsupervised SimCSE. PrdSimCSE is positioned within a probabilistic framework that expands the semantic similarity event space and generates superior negative samples, contributing to more accurate semantic similarity estimations. The model's efficacy is validated on standard semantic similarity tasks, showing a notable improvement over existing unsupervised models, specifically a 1.08% performance gain on BERT-base. Through detailed experiments, the effectiveness of positive and negative prefixes in data augmentation and their impact on the learning model are explored, and the broader implications of prefix data augmentation for unsupervised sentence embedding learning are discussed.
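For context, the following is a minimal sketch of the prefix-augmentation idea: positive and negative pairs are built by prepending prefixes to the same sentence, and a SimCSE-style contrastive loss pulls anchors toward their prefixed positives and away from the prefixed negatives. The prefixes, encoder, pooling, and temperature are illustrative assumptions, not the PrdSimCSE implementation.

```python
# Sketch of prefix data augmentation for contrastive sentence embeddings.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

POS_PREFIX = "It is right that "   # hypothetical positive prefix
NEG_PREFIX = "It is wrong that "   # hypothetical negative prefix
TEMPERATURE = 0.05

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # Mean-pool the last hidden states over non-padding tokens.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state            # (B, L, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)

def prd_contrastive_loss(sentences):
    anchors = embed(sentences)
    positives = embed([POS_PREFIX + s for s in sentences])
    negatives = embed([NEG_PREFIX + s for s in sentences])

    # Each anchor is compared with all prefixed positives (in-batch negatives)
    # plus its own explicitly constructed negative.
    sim_pos = F.cosine_similarity(anchors.unsqueeze(1), positives.unsqueeze(0), dim=-1)
    sim_neg = F.cosine_similarity(anchors, negatives, dim=-1).unsqueeze(1)
    logits = torch.cat([sim_pos, sim_neg], dim=1) / TEMPERATURE

    # The correct "class" for each anchor is its own prefixed positive (diagonal).
    labels = torch.arange(len(sentences))
    return F.cross_entropy(logits, labels)

loss = prd_contrastive_loss([
    "The model generalises well to unseen domains.",
    "Training converged after three epochs.",
])
print(float(loss))
```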

16 pages, 2169 KiB  
Article
Causal Reinforcement Learning for Knowledge Graph Reasoning
by Dezhi Li, Yunjun Lu, Jianping Wu, Wenlu Zhou and Guangjun Zeng
Appl. Sci. 2024, 14(6), 2498; https://doi.org/10.3390/app14062498 - 15 Mar 2024
Viewed by 529
Abstract
Knowledge graph reasoning can deduce new facts and relationships, and is an important research direction for knowledge graphs. Most existing methods are based on end-to-end reasoning, which cannot effectively exploit the knowledge graph, so their performance still needs to be improved. Therefore, we combine causal inference with reinforcement learning and propose a new framework for knowledge graph reasoning. By incorporating the counterfactual method from causal inference, our approach obtains additional information as prior knowledge and integrates it into the control strategy of the reinforcement learning model. The proposed method mainly comprises relationship importance identification, reinforcement learning framework design, policy network design, and the training and testing of the causal reinforcement learning model. Specifically, a prior knowledge table is first constructed to indicate which relationships are more important for the query at hand; secondly, the state space, action space, state transition, reward, and optimization objective are designed; then, a reference value is set and compared with the weight of each candidate edge, and an action is selected according to the comparison result, either through prior knowledge or through the neural network; finally, the parameters of the reinforcement learning model are determined through training and testing. We compare our method with baseline methods on four datasets and conduct ablation experiments. On the NELL-995 and FB15k-237 datasets, the MAP scores of our method are 87.8 and 45.2, respectively, achieving the best performance.
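As a rough illustration of the action-selection rule outlined above (prior knowledge versus a policy network, switched by a threshold comparison), the sketch below uses an invented relation-importance table; the table, threshold, state encoding, and network are illustrative assumptions, not the authors' implementation.

```python
# Sketch of prior-guided action selection for knowledge graph path reasoning.
import torch
import torch.nn as nn

# Hypothetical prior table: importance of following relation r when answering a
# query about relation q (in the paper this comes from counterfactual analysis).
PRIOR = {("works_for", "located_in"): 0.9, ("works_for", "friend_of"): 0.2}
THRESHOLD = 0.8

class PolicyNet(nn.Module):
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, state: torch.Tensor, action_embs: torch.Tensor) -> torch.Tensor:
        # Score every candidate edge given the current state, then normalise.
        expanded = state.expand(action_embs.size(0), -1)
        scores = self.net(torch.cat([expanded, action_embs], dim=-1)).squeeze(-1)
        return torch.softmax(scores, dim=0)

def choose_edge(query_rel, candidates, state, action_embs, policy):
    # candidates: list of (relation, target_entity) edges from the current node.
    weights = [PRIOR.get((query_rel, rel), 0.0) for rel, _ in candidates]
    if max(weights) >= THRESHOLD:
        # Prior-knowledge branch: follow the most important relation directly.
        return candidates[weights.index(max(weights))]
    # Otherwise sample an edge from the policy network's distribution.
    probs = policy(state, action_embs)
    return candidates[torch.multinomial(probs, 1).item()]

policy = PolicyNet(state_dim=32, action_dim=32)
candidates = [("located_in", "Vigo"), ("friend_of", "Alice")]
edge = choose_edge("works_for", candidates, torch.randn(1, 32), torch.randn(2, 32), policy)
print("chosen edge:", edge)
```

In this sketch, only the policy-network branch would contribute policy gradients during training; the prior branch acts as fixed guidance injected into the control strategy.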
