New Challenges in Machine Learning and Natural Language Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 July 2023) | Viewed by 982

Special Issue Editor


Prof. Dr. John Atkinson Abutridy
Guest Editor
Faculty of Engineering and Sciences, Universidad Adolfo Ibáñez (UAI), Santiago, Chile
Interests: natural language processing; text analytics; machine learning; artificial intelligence

Special Issue Information

Dear Colleagues,

Natural Language Processing (NLP) has become a key technology of the information society. In recent years, research in NLP has grown rapidly, and many practical NLP applications have reached the market, including information retrieval systems and machine translation products. Furthermore, recent Transformer-based large language models (LLMs) such as GPT and BERT have advanced the state of the art in human-language interaction and large-scale document processing. This progress, however, raises numerous relevant unsolved theoretical and technological problems that await further research.

This Special Issue on new challenges in machine learning and NLP aims to assemble contributions from researchers across the fields of human language technology, presenting results from the forefront of research and discussing the current state of the technology and its impact on our lives.

Prof. Dr. John Atkinson Abutridy
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computational phonology/morphology, word segmentation, and POS tagging
  • syntax, semantics, grammar, and the lexicon
  • lexical semantics and ontologies
  • document summarization and abstracting
  • natural language generation and neural models
  • novel large language models
  • paraphrasing and textual entailment
  • parsing and chunking
  • dialogue models
  • speech processing
  • industrial-strength applications of NLP
  • novel machine learning methods and architectures for NLP
  • novel deep learning architectures for NLP tasks
  • spoken language analysis and generation, and speech-to-speech translation
  • computational pragmatics
  • human dialogue system and conversational agents
  • computational models of discourse
  • question answering
  • word sense disambiguation
  • information extraction and text mining
  • semantic role labeling
  • sentiment analysis and opinion mining
  • machine translation and translation aids
  • multilingual NLP
  • machine learning for text mining
  • corpus development and language resources
  • evaluation methods and user studies
  • explainable and interpretable NLP
  • combining symbolic and neural approaches to NLP

Published Papers (1 paper)


Research

15 pages, 2582 KiB  
Article
An Improved Chinese Pause Fillers Prediction Module Based on RoBERTa
by Ling Yu, Xiaoqun Zhou and Fanglin Niu
Appl. Sci. 2023, 13(19), 10652; https://doi.org/10.3390/app131910652 - 25 Sep 2023
Viewed by 540
Abstract
The prediction of pause fillers plays a crucial role in enhancing the naturalness of synthesized speech. In recent years, neural networks including LSTM, BERT, and XLNet have been employed in pause-filler prediction modules; however, these methods have exhibited relatively low accuracy in predicting pause fillers. This paper introduces the utilization of the RoBERTa model for predicting Chinese pause fillers and presents a novel approach to training it, effectively enhancing the accuracy of Chinese pause-filler prediction. Our proposed approach involves categorizing text from different speakers into four distinct style groups based on the frequency and position of Chinese pause fillers. The RoBERTa model is trained on these four groups of data, which incorporate different styles of fillers, thereby ensuring a more natural synthesis of speech. The Chinese pause-filler prediction module is evaluated on systems such as Parallel Tacotron2, FastPitch, and Deep Voice3, achieving a notable 26.7% improvement in word-level prediction accuracy compared to the BERT model, along with a 14% improvement in position-level prediction accuracy. This substantial gain results in a significant enhancement of the naturalness of the generated speech.
(This article belongs to the Special Issue New Challenges in Machine Learning and Natural Language Processing)
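For readers who want to experiment with the general idea described in the abstract above, the following is a minimal, hypothetical sketch of token-level pause-filler prediction using a Chinese RoBERTa checkpoint via the Hugging Face Transformers library. The checkpoint name, the binary label set, and the untrained classification head are illustrative assumptions only; they do not reproduce the authors' implementation or their four-style-group training scheme.

```python
# Sketch only: token-level pause-filler prediction with a Chinese RoBERTa checkpoint.
# The checkpoint, labels, and head below are assumptions, not the paper's code.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "hfl/chinese-roberta-wwm-ext"  # assumed stand-in Chinese RoBERTa checkpoint
LABELS = ["O", "FILLER_AFTER"]              # hypothetical labels: insert a pause filler after this token?

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))
model.eval()  # the classification head here is freshly initialized; per the abstract,
              # it would first be fine-tuned on text grouped into four filler-style groups

def predict_filler_positions(text: str):
    """Return (token, label) pairs marking where a pause filler could be inserted."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
    pred_ids = logits.argmax(dim=-1)[0].tolist()
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return [(tok, LABELS[i]) for tok, i in zip(tokens, pred_ids)
            if tok not in tokenizer.all_special_tokens]

if __name__ == "__main__":
    for token, label in predict_filler_positions("今天我们来讨论语音合成的自然度"):
        print(f"{token}\t{label}")
```

In a full pipeline along the lines of the abstract, the fine-tuned predictor's output would then drive filler insertion in a synthesis system such as FastPitch or Parallel Tacotron2.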