Mathematical Modeling, Machine Learning, and Intelligent Computing for Internet of Things

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 2070

Special Issue Editor


Prof. Dr. Hao Sheng
Guest Editor
School of Computer Science and Engineering, Beihang University, Beijing 100191, China
Interests: computer vision; pattern recognition; machine learning

Special Issue Information

Dear Colleagues,

The advent of the Internet of Things (IoT) has created a demand for advanced technologies in mathematical modeling, machine learning, and intelligent computing to address complex and challenging issues. These technologies play a pivotal role in understanding and processing the diverse data types generated by increasingly sophisticated sensors, encompassing tasks such as target recognition, segmentation, depth estimation, object tracking, and semantic understanding. They also extend to methods of interconnecting objects, including blockchain technology. Originally developed for artificial datasets and abstract tasks, these technologies have now found practical applications in solving real-world problems. Consequently, this Special Issue is dedicated to showcasing the latest theoretical and practical research in mathematical modeling, machine learning, and intelligent computing within the context of the IoT.

Prof. Dr. Hao Sheng
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • deep learning
  • natural language processing
  • computer vision
  • mathematical modeling
  • intelligent computing
  • blockchain

Published Papers (2 papers)


Research

19 pages, 2519 KiB  
Article
KDTM: Multi-Stage Knowledge Distillation Transfer Model for Long-Tailed DGA Detection
by Baoyu Fan, Han Ma, Yue Liu, Xiaochen Yuan and Wei Ke
Mathematics 2024, 12(5), 626; https://doi.org/10.3390/math12050626 - 20 Feb 2024
Viewed by 560
Abstract
As the most commonly used attack strategy of botnets, the Domain Generation Algorithm (DGA) has strong invisibility and variability. Using deep learning models to detect different families of DGA domain names can improve network defense against such attacks. However, this task faces an extremely imbalanced sample size across DGA categories, which leads to low classification accuracy for small-sample categories and even complete classification failure for some of them. To address this issue, we introduce the long-tailed concept and augment the data of small-sample categories by transferring pre-trained knowledge. First, we propose the Data Balanced Review Method (DBRM) to reduce the sample-size difference between categories, thereby generating a relatively balanced dataset for transfer learning. Second, we propose the Knowledge Transfer Model (KTM) to enhance the knowledge of the small-sample categories; KTM uses a multi-stage transfer to move weights from the large-sample categories to the small-sample ones. Furthermore, we propose the Knowledge Distillation Transfer Model (KDTM) to mitigate the catastrophic forgetting caused by transfer learning, which adds a knowledge distillation loss on top of the KTM. The experimental results show that KDTM significantly improves the classification performance of all categories, especially the small-sample ones, achieving a state-of-the-art macro-average F1 score of 84.5%. The robustness of KDTM is verified on three DGA datasets that follow Pareto distributions.
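The abstract does not give KDTM's exact distillation formulation, but the standard temperature-scaled knowledge distillation term it builds on can be sketched as follows (function names and the temperature value are illustrative, not taken from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL divergence between the softened teacher and student distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

In a multi-stage transfer setting, a term like this would be added to the classification loss at each stage so the student retains the previous stage's knowledge of large-sample categories while adapting to small-sample ones.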

16 pages, 978 KiB  
Article
VL-Meta: Vision-Language Models for Multimodal Meta-Learning
by Han Ma, Baoyu Fan, Benjamin K. Ng and Chan-Tong Lam
Mathematics 2024, 12(2), 286; https://doi.org/10.3390/math12020286 - 16 Jan 2024
Cited by 1 | Viewed by 1247
Abstract
Multimodal learning is a promising area in artificial intelligence (AI) that enables models to understand different kinds of data. Existing works typically re-train a new model on top of pre-trained models, which requires large amounts of data, computational power, and time, and is difficult to achieve in low-resource or small-sample situations. We therefore propose VL-Meta, Vision-Language Models for Multimodal Meta-Learning. VL-Meta (1) presents the vision-language mapper and multimodal fusion mapper, lightweight model structures that reuse existing pre-trained models to map images into the language feature space, saving training data, computational power, and time; (2) constructs a meta-task pool that uses only a small amount of data to build sufficient training tasks and improve the generalization of the model, so that it learns both data knowledge and task knowledge; (3) proposes token-level training, which aligns inputs with outputs during training to improve model performance; and (4) adopts a multi-task fusion loss to learn different abilities within the model. VL-Meta achieves good performance on the Visual Question Answering (VQA) task, which shows the feasibility and effectiveness of the model. This solution can help blind or visually impaired individuals obtain visual information.
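The meta-task pool idea — building many episodic training tasks from a small labeled dataset — can be sketched in the standard N-way, K-shot form. The abstract does not specify VL-Meta's exact construction, so all names and parameters below are illustrative:

```python
import random

def build_meta_task_pool(samples, n_way=2, k_shot=2, q_query=1, n_tasks=5, seed=0):
    """Build episodic meta-learning tasks from a small labeled dataset.

    Each task is an N-way, K-shot episode: n_way classes are drawn, with
    k_shot support examples and q_query query examples per class. Repeated
    sampling turns a small dataset into many distinct training tasks.
    """
    rng = random.Random(seed)
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    tasks = []
    for _ in range(n_tasks):
        labels = rng.sample(sorted(by_label), n_way)   # pick N classes
        support, query = [], []
        for lab in labels:
            picks = rng.sample(by_label[lab], k_shot + q_query)
            support += [(x, lab) for x in picks[:k_shot]]
            query += [(x, lab) for x in picks[k_shot:]]
        tasks.append({"support": support, "query": query})
    return tasks
```

A model trained over such a pool adapts to each task from its support set and is evaluated on the query set, which is what lets a small dataset teach both data knowledge and task knowledge.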
