High-Performance Computing, Networking and Artificial Intelligence

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 September 2023) | Viewed by 1706

Special Issue Editor


Guest Editor
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: artificial intelligence; deep learning; high-performance computing; cloud computing

Special Issue Information

Dear Colleagues,

Alongside the rapid growth of computing and networking technology, the past decade has witnessed the proliferation of powerful parallel and distributed systems and an ever-increasing demand for high-performance computing, networking, and artificial intelligence (HPCNAI). HPCNAI underpins many of today's most exciting research areas, from big data analytics to machine vision, cloud computing to edge computing, image processing to natural language processing, and many others. This Special Issue aims to present novel techniques, empirical studies, and developments in this field of research.

Topics of interest for this Special Issue include, but are not limited to:

  • High-performance computing theories and architectures;
  • AI-empowered high-performance computing trends and models;
  • Parallel and distributed system theories and architectures;
  • Web services and Internet computing;
  • Multi-cloud and multi-access edge computing and intelligence;
  • Mobile computing and wireless communications;
  • Pervasive/ubiquitous computing and intelligence;
  • HPCNAI applications in distributed scheduling and resource management;
  • HPCNAI applications in anomaly detection and data analytics;
  • AI applications in image processing and pattern recognition;
  • AI applications in natural language processing;
  • Neural networks and deep learning.

Prof. Dr. Lei Wu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • high-performance computing
  • parallel and distributed systems
  • cloud computing
  • mobile computing
  • artificial intelligence
  • pattern recognition

Published Papers (2 papers)


Research

19 pages, 2395 KiB  
Article
Improving Collaborative Filtering Recommendations with Tag and Time Integration in Virtual Online Communities
by Hyeon Jo, Jong-hyun Hong and Joon Yeon Choeh
Appl. Sci. 2023, 13(18), 10528; https://doi.org/10.3390/app131810528 - 21 Sep 2023
Viewed by 689
Abstract
In recent years, virtual online communities have experienced rapid growth. These communities enable individuals to share and manage images or websites by employing tags. A collaborative tagging system (CTS) facilitates the process by which internet users collectively organize resources. CTS offers a plethora of useful information, including tags and timestamps, which can be utilized for recommendations. A tag represents an implicit evaluation of the user’s preference for a particular resource, while timestamps indicate changes in the user’s interests over time. As the amount of information increases, it is feasible to integrate more detailed data, such as tags and timestamps, to improve the quality of personalized recommendations. The current study employs collaborative filtering (CF), which incorporates both tag and time information to enhance recommendation precision. A computational recommender system is established to generate weights and calculate similarities by incorporating tag data and time. The effectiveness of our recommendation model was evaluated by linearly merging tag and time data. In addition, the proposed CF method was validated by applying it to big data sets in the real world. To assess its performance, the size of the neighborhood was adjusted in accordance with the standard CF procedure. The experimental results indicate that our proposed method significantly improves the quality of recommendations compared to the basic CF approach. Full article
(This article belongs to the Special Issue High-Performance Computing, Networking and Artificial Intelligence)
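The tag- and time-weighted collaborative filtering described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy data, the exponential time-decay weighting, and the use of tag counts as implicit ratings are all assumptions based on the abstract's description.

```python
import numpy as np

# Toy interaction data: each record is (user, item, tag_count, timestamp).
# Tag counts serve as implicit preference signals; timestamps weight recent
# interactions more heavily via an (assumed) exponential decay.
interactions = [
    ("u1", "i1", 3, 100), ("u1", "i2", 1, 50),
    ("u2", "i1", 2, 90),  ("u2", "i3", 4, 120),
    ("u3", "i2", 2, 110), ("u3", "i3", 1, 60),
]

users = sorted({u for u, _, _, _ in interactions})
items = sorted({i for _, i, _, _ in interactions})
now, decay = 130, 0.01  # reference time and decay rate (assumed values)

# Build a user-item weight matrix combining tag counts and time decay.
R = np.zeros((len(users), len(items)))
for u, i, tags, t in interactions:
    R[users.index(u), items.index(i)] = tags * np.exp(-decay * (now - t))

def cosine_sim(a, b):
    """Cosine similarity between two user profiles."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Predict u1's score for i3 as a similarity-weighted average of the
# weighted ratings given by other users who interacted with i3.
target, item = users.index("u1"), items.index("i3")
num = den = 0.0
for v in range(len(users)):
    if v == target or R[v, item] == 0:
        continue
    s = cosine_sim(R[target], R[v])
    num += s * R[v, item]
    den += abs(s)
pred = num / den if den else 0.0
print(round(pred, 3))
```

In the paper the tag and time signals are merged linearly with tuned weights; the decay above simply folds both into one matrix to keep the sketch short.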

17 pages, 986 KiB  
Article
Grouped Contrastive Learning of Self-Supervised Sentence Representation
by Qian Wang, Weiqi Zhang, Tianyi Lei and Dezhong Peng
Appl. Sci. 2023, 13(17), 9873; https://doi.org/10.3390/app13179873 - 31 Aug 2023
Cited by 1 | Viewed by 629
Abstract
This paper proposes a method called Grouped Contrastive Learning of self-supervised Sentence Representation (GCLSR), which can learn an effective and meaningful representation of sentences. Previous works take maximizing the similarity between two vectors as the objective of contrastive learning, and therefore suffer from the high dimensionality of the vectors. In addition, most previous works have adopted discrete data augmentation to obtain positive samples and have directly employed a contrastive framework from computer vision to perform contrastive training, which could hamper training because text data are discrete and sparse compared with image data. To address these issues, we design a novel contrastive learning framework, GCLSR, which divides the high-dimensional feature vector into several groups and computes each group's contrastive loss separately to make use of more local information, ultimately obtaining a more fine-grained sentence representation. In addition, GCLSR incorporates a new self-attention mechanism and a continuous, partial-word vector augmentation (PWVA). For discrete and sparse text data, self-attention helps the model focus on informative words by measuring the importance of every word in a sentence, while PWVA supplies high-quality positive samples for contrastive learning. Experimental results demonstrate that the proposed GCLSR achieves encouraging results on the challenging semantic textual similarity (STS) and transfer tasks. Full article
(This article belongs to the Special Issue High-Performance Computing, Networking and Artificial Intelligence)
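The grouping idea can be sketched as follows: the embedding is split into groups of dimensions, an InfoNCE-style contrastive loss is computed per group, and the group losses are averaged. This is a hedged sketch, not the paper's implementation; the group count, temperature, and exact loss form are assumptions based on the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_contrastive_loss(z1, z2, num_groups=4, tau=0.05):
    """Average an InfoNCE-style loss over groups of embedding dimensions.

    z1, z2: (batch, dim) embeddings of two augmented views of the same
    sentences. Each group of dim/num_groups features gets its own
    contrastive loss; the group losses are averaged.
    """
    batch, dim = z1.shape
    assert dim % num_groups == 0
    losses = []
    for g in np.split(np.arange(dim), num_groups):
        a, b = z1[:, g], z2[:, g]
        # L2-normalize each group so dot products are cosine similarities.
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        logits = a @ b.T / tau                       # (batch, batch) similarities
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        # Positive pairs sit on the diagonal (view i of sentence i).
        losses.append(-log_probs[np.arange(batch), np.arange(batch)].mean())
    return float(np.mean(losses))

# Two slightly noisy views of the same random "sentence embeddings":
# the loss should be small because positives dominate each group.
z = rng.normal(size=(8, 64))
loss = group_contrastive_loss(z + 0.01 * rng.normal(size=z.shape),
                              z + 0.01 * rng.normal(size=z.shape))
print(round(loss, 4))
```

Computing the loss per low-dimensional group, rather than over the full vector, is what lets the method exploit local structure in the representation.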
