Future Trends in Intelligent Edge Computing and Networking

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 2507

Special Issue Editors


Prof. Dr. Bo Yang
Guest Editor
School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
Interests: MEC; AI; RIS

Dr. Yueyue Dai
Guest Editor
School of Cyber Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: MEC; federated learning; blockchain

Dr. Xuelin Cao
Guest Editor
School of Cyber Engineering, Xidian University, Xi’an 710126, China
Interests: MAC; RIS; SAGIN

Special Issue Information

Dear Colleagues,

Recently, mobile/multi-access edge computing (MEC) has come to play a dominant role in supporting various computational applications. To improve MEC performance in dynamic wireless environments, robust computation-offloading decision-making methods and efficient wireless networking techniques are essential: the former serve as the ‘brain’ of MEC systems, while the latter provide the communication channels connecting users to the edge.
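
As a minimal illustration of the offloading decisions such methods automate, the Python sketch below compares an estimated local-execution latency against an estimated upload-plus-edge-execution latency. All parameter names and example numbers are hypothetical and not drawn from any specific system in this issue.

# Minimal sketch of a latency-based computation-offloading decision.
# All parameters (task size, CPU frequencies, uplink rate) are hypothetical.

def local_latency(task_cycles: float, local_cpu_hz: float) -> float:
    """Time to run the task entirely on the mobile device."""
    return task_cycles / local_cpu_hz

def offload_latency(task_bits: float, uplink_bps: float,
                    task_cycles: float, edge_cpu_hz: float) -> float:
    """Time to upload the task input, then run it on the edge server."""
    return task_bits / uplink_bps + task_cycles / edge_cpu_hz

def should_offload(task_bits: float, task_cycles: float, local_cpu_hz: float,
                   uplink_bps: float, edge_cpu_hz: float) -> bool:
    """Offload only when the edge path is estimated to be faster."""
    return (offload_latency(task_bits, uplink_bps, task_cycles, edge_cpu_hz)
            < local_latency(task_cycles, local_cpu_hz))

# Example: a 1 Mb input and 10^9 CPU cycles, on a 1 GHz device versus a
# 10 GHz edge server reached over a 20 Mbps uplink.
print(should_offload(1e6, 1e9, 1e9, 20e6, 10e9))  # True: 0.15 s vs. 1.0 s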

As we evolve toward the artificial intelligence of things (AIoT), future mobile networks must support a much wider range of AI-enabled applications, such as virtual reality (VR), augmented reality (AR), and autonomous driving. These and many other new requirements call for a new computing paradigm: intelligent edge computing, also known as edge intelligence, which has emerged as the next frontier and a cornerstone of future intelligent networks. In this context, there remain opportunities to apply advanced machine learning techniques to the joint design of computing and communications, so that emerging services can be realized easily and economically.

Prof. Dr. Bo Yang
Dr. Yueyue Dai
Dr. Xuelin Cao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • mobile/multi-access edge computing
  • AI
  • computation offloading
  • future intelligent networks

Published Papers (2 papers)

Research

14 pages, 495 KiB  
Article
A Multi-View Interactive Approach for Multimodal Sarcasm Detection in Social Internet of Things with Knowledge Enhancement
by Hao Liu, Bo Yang and Zhiwen Yu
Appl. Sci. 2024, 14(5), 2146; https://doi.org/10.3390/app14052146 - 04 Mar 2024
Cited by 1 | Viewed by 592
Abstract
Multimodal sarcasm detection is a developing research field in the social Internet of Things and a foundation of artificial intelligence and human psychology research. Sarcastic comments posted on social media often imply people’s real attitudes toward the events they are commenting on, reflecting their current emotional and psychological state. At the same time, the limited memory of Internet of Things mobile devices poses challenges for deploying sarcasm detection models, and an abundance of parameters also increases a model’s inference time. Social networking platforms such as Twitter and WeChat generate large amounts of multimodal data, which can provide more comprehensive information than unimodal data. Therefore, when studying sarcasm detection in the social Internet of Things, it is necessary to consider both inter-modal interaction and the number of model parameters. In this paper, we propose a lightweight multimodal interaction model with knowledge enhancement based on deep learning. By integrating visual commonsense knowledge into the sarcasm detection model, we enrich the semantic information of the image and text modal representations. Additionally, we develop a multi-view interaction method to facilitate interaction between modalities from different modal perspectives. The experimental results indicate that the proposed model outperforms the unimodal baselines and achieves performance comparable to the multimodal baselines with a small number of parameters.
(This article belongs to the Special Issue Future Trends in Intelligent Edge Computing and Networking)
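
As a rough, generic illustration of the cross-modal interaction described in this abstract, the PyTorch sketch below lets text tokens attend to image-region features and vice versa, then fuses the two views for classification. The dimensions, module names, and mean-pooling fusion are our own assumptions, not the authors’ architecture.

# Hedged sketch of cross-modal interaction via cross-attention; a generic
# illustration, not a reproduction of the paper's multi-view model.
import torch
import torch.nn as nn

class CrossModalInteraction(nn.Module):
    """Text tokens attend to image regions, and image regions to text."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.text_to_image = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # sarcastic / not sarcastic

    def forward(self, text_feats, image_feats):
        # text_feats: (B, T, dim); image_feats: (B, R, dim)
        t_view, _ = self.text_to_image(text_feats, image_feats, image_feats)
        i_view, _ = self.image_to_text(image_feats, text_feats, text_feats)
        # Mean-pool each view, concatenate, and classify.
        fused = torch.cat([t_view.mean(dim=1), i_view.mean(dim=1)], dim=-1)
        return self.classifier(fused)  # (B, 2) logits

# Toy usage with random features standing in for text/image encoder outputs.
model = CrossModalInteraction()
logits = model(torch.randn(2, 12, 256), torch.randn(2, 49, 256))
print(logits.shape)  # torch.Size([2, 2])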

15 pages, 3552 KiB  
Article
Improved YOLOv7 Algorithm for Small Object Detection in Unmanned Aerial Vehicle Image Scenarios
by Xinmin Li, Yingkun Wei, Jiahui Li, Wenwen Duan, Xiaoqiang Zhang and Yi Huang
Appl. Sci. 2024, 14(4), 1664; https://doi.org/10.3390/app14041664 - 19 Feb 2024
Cited by 1 | Viewed by 1022
Abstract
Object detection in unmanned aerial vehicle (UAV) images has become a popular research topic in recent years. However, UAV images are captured from high altitudes, with a large proportion of small objects and dense object regions, posing a significant challenge to small object detection. To address this issue, we propose an efficient YOLOv7-UAV algorithm in which a low-level prediction head (P2) is added to detect small objects from the shallow feature map and the deep-level prediction head (P5) is removed to reduce the effect of excessive down-sampling. Furthermore, we modify the bidirectional feature pyramid network (BiFPN) structure with a weighted cross-level connection to enhance the fusion of multi-scale feature maps in UAV images. To mitigate the mismatch between predicted and ground-truth boxes, the SCYLLA-IoU (SIoU) function is employed in the regression loss to accelerate training convergence. Moreover, the proposed YOLOv7-UAV algorithm has been quantized and compiled in the Vitis AI development environment and validated in terms of power consumption and hardware resources on an FPGA platform. The experiments show that the resource consumption of YOLOv7-UAV is reduced by 28%, its mAP improves by 3.9% compared to YOLOv7, and the FPGA implementation improves energy efficiency by a factor of 12 compared to a GPU.
(This article belongs to the Special Issue Future Trends in Intelligent Edge Computing and Networking)
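
To illustrate the weighted-fusion idea behind BiFPN-style connections, here is a minimal PyTorch sketch of fast normalized fusion with learned non-negative weights. The paper’s exact weighted cross-level connection, channel counts, and feature shapes are not specified here, so everything below is an assumption.

# Hedged sketch of BiFPN-style weighted feature fusion ("fast normalized
# fusion"); the paper's exact cross-level connection may differ.
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fuses N same-shaped feature maps with learned non-negative weights."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, feats):
        # feats: list of tensors with identical shape (B, C, H, W)
        w = torch.relu(self.weights)   # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)   # normalize so the weights sum to ~1
        return sum(wi * f for wi, f in zip(w, feats))

# Toy usage: fuse a shallow (P2-level) map with an upsampled deeper map.
fuse = WeightedFusion(num_inputs=2)
p2, p3_up = torch.randn(1, 64, 80, 80), torch.randn(1, 64, 80, 80)
print(fuse([p2, p3_up]).shape)  # torch.Size([1, 64, 80, 80])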