
Research on a Cross-Domain Few-Shot Adaptive Classification Algorithm Based on Knowledge Distillation Technology

1 Hubei Provincial Engineering Technology Research Center of Green Chemical Equipment, School of Mechanical and Electrical Engineering, Wuhan Institute of Technology, Wuhan 430205, China
2 School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430205, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(6), 1939; https://doi.org/10.3390/s24061939
Submission received: 26 January 2024 / Revised: 10 March 2024 / Accepted: 16 March 2024 / Published: 18 March 2024
(This article belongs to the Section Intelligent Sensors)

Abstract

With the development of deep learning, sensors, and sensor-based data collection methods, computer vision inspection technology has advanced rapidly. Deep-learning-based classification algorithms require a substantial quantity of training samples to obtain a model with strong generalization capabilities. However, owing to privacy restrictions, annotation costs, and the conditions under which sensor images are captured, making full use of limited samples has become a major challenge for practical training and deployment. Furthermore, when models are trained in simulation and transferred to actual image scenarios, discrepancies often arise between the common training sets and the target domain (domain offset). Meta-learning currently offers a promising solution for few-shot learning problems; however, the quantity of support-set data in the target domain remains limited, restricting the effectiveness of cross-domain learning. To address this challenge, we developed a self-distillation and mixing (SDM) method based on a Teacher–Student framework. The method transfers knowledge from the source domain to the target domain by applying self-distillation and mixed data augmentation, learning better image representations from relatively abundant datasets, and fine-tuning in the target domain. In comparison with nine classical models, the experimental results demonstrate that the SDM method excels in both training time and accuracy. Furthermore, SDM effectively transfers knowledge from the source domain to the target domain, even with a limited number of target domain samples.

1. Introduction

At present, equipment can obtain a large amount of image data by being equipped with various sensors, and the data collected by the sensors can be used to train recognition and detection models with excellent generalization ability by means of methods such as deep learning [1]. However, in real-world scenarios, obtaining a sufficient quantity of high-quality labeled samples can be challenging due to privacy concerns, labeling costs, and other factors [2]. This often leads to the emergence of the few-shot problem. There are two primary reasons for the emergence of this issue: (1) the availability of data is adequate, but the amount of labeled data is limited, and (2) the dataset itself has a limited amount of data. In practical tests, the second reason has been found to be the primary contributor [3,4]. The few-shot problem can cause significant overfitting in deep learning models during training. Therefore, how to train a model using only a small amount of labeled data while still maintaining good generalization ability has become a critical research focus.
Few-shot learning algorithms primarily rely on data enhancement, model optimization, and transfer learning techniques [5,6]. Among these methods, transfer learning stands out as a leading research direction. This approach takes advantage of the relevance between the predicted target and the source model trained on related tasks with a large amount of data. It then applies the knowledge learned from a small number of samples to a new field. Since most few-shot tasks provide algorithms that transfer knowledge from a large collection of source datasets to a sparsely annotated collection of target categories, this method essentially falls under the category of meta-learning [7]. Therefore, this paper focuses primarily on the transfer learning algorithm based on meta-learning as its main research content.
In deep learning tasks, the training and test data are typically drawn from the same dataset. In practical applications, however, the source and target data may be sampled separately, or the training and test environments may differ, leading to the cross-domain problem [8]. The cross-domain problem refers to differences between the source and target domains in feature space, category space, or marginal distribution, which degrade the model's generalization performance in the target domain. Cross-domain problems can be categorized as domain adaptation (DA) and domain generalization (DG) problems. The primary difference is that DA addresses scenarios where the category spaces of the source and target domains are the same but the marginal distributions differ, whereas DG is designed for cases where the category spaces of the source and target domains are distinct or do not overlap at all. DG is therefore generally more challenging than DA. To solve the few-shot problem, the contradiction between limited data in a specific target domain and rich source domain data must be handled appropriately. Two main solutions are currently used [9]: (1) supervised domain adaptation (SDA) methods, which follow the framework of domain adaptation learning while incorporating design elements adapted to few-shot datasets; and (2) few-shot learning techniques, which enhance the model architecture and combine it with auxiliary learning tasks (such as data alignment and meta-learning) to improve the model's domain adaptation ability.
The early SDA methods used for classification tasks relied on matrix mapping and linear classifiers, such as SHFR-ECOC and sparse feature transformation [10]. With the rapid development of deep learning, however, the emergence of end-to-end classification models has greatly simplified feature transformation. For instance, the CCSA model [11] trains a twin network architecture through joint supervision of a category entropy (i.e., softmax) loss and a point-wise contrastive loss, leading to fast convergence and improved classification accuracy. Another model, FADA [12], uses adversarial training to achieve feature alignment, enhancing the discriminatory power of the network's discriminators with four carefully designed types of data pairs. A further SDA method [13] uses stochastic neighborhood embedding for domain adaptation, minimizing the maximum distance between source and target domains to classify image categories. However, training these SDA methods requires considerable time to pair samples between the source domain D_s and the target domain D_t. Although the CTL method [14] can be trained on single-stream data with a small-batch strategy, a new learning paradigm is still needed to replace the tedious sample-pairing work and simplify the data flow. Huang et al. [15] introduced the Aligning Distillation (ALDI) framework, which aligns students with teachers by comparing their recommendation characteristics. This alignment is achieved through tailored rating distribution alignment, ranking alignment, and identification alignment losses, effectively narrowing the disparities between the two. Furthermore, ALDI incorporates a teacher-qualifying weighting structure to safeguard students from acquiring inaccurate information from unreliable teachers. Experimental results demonstrate that ALDI surpasses state-of-the-art baselines.
Due to the cross-domain and few-shot nature of the problem, the number of samples in the source and target domains is limited, and there are significant distribution differences between them. To address these challenges, this paper proposes a novel self-distillation mixed model (SDM). To address the issue of small sample size, this paper employs a Teacher–Student architecture and MixUp technology to perform self-distillation learning after dataset augmentation. To tackle the cross-domain problem, transfer learning is applied to learn better image representations from a relatively abundant dataset and fine-tune it in the target domain. Given the significant achievements of MixUp data augmentation technology in image classification, we use MixUp to fine-tune the model in the target domain. By mixing images between different categories, we expand the limited training dataset in the target domain and reduce the model’s memory of noisy samples to mitigate their impact on the model. In this way, the model can learn the domain information of the target domain using only a small amount of labeled data, achieving better transfer effects. Since training methods and backbones can affect the convergence speed of the model, parallel training with multiple GPUs is adopted in the experiment to increase the batch size. At the same time, ResNet is used as the backbone network of the model to improve training speed and convergence speed. Experimental results demonstrate the efficacy of both the Teacher–Student architecture and optimized MixUp technology in enhancing SDM’s classification and migration capabilities. SDM not only fully utilizes the limited labeled data to capture target domain information but also mitigates the risk of model overfitting. When compared to current mainstream models, SDM achieves the highest average classification accuracy.

2. Model Principle

The structure of the supervised domain-adaptive model based on self-distillation and mixing is shown in Figure 1. The primary objective of this model is to address the disparities in data distribution between the target and source domains, ultimately aiming to develop a model that demonstrates excellent generalizability within the target domain.
Model training is mainly divided into two stages: pre-training on the source domain and fine-tuning on the target domain.
(1) Pre-training stage
To enhance the learning ability of the model on the source domain data, we utilized labeled data and achieved pre-training of the SDM model (Figure 2). SDM employed a data enhancement technique that encompassed both weak and strong enhancement operations for the images in the source domain. For weak enhancements, SDM utilized methods like random cropping and flipping to improve the model’s ability to identify objects at various positions and angles. Furthermore, SDM employed a SimCLR-like strategy for strong enhancement operations, including techniques such as random color changes and Gaussian blurring, to enhance the model’s robustness to variations in illumination, color, and texture. During training, SDM fed the weakly enhanced images into the Teacher model and the strongly enhanced images into the Student model to obtain their intermediate feature and outcome vectors. Subsequently, SDM optimized the model parameters by calculating the classification loss of the Student model and the alignment loss of the Teacher with the Student model. By adjusting the variable parameters, SDM was able to flexibly control the weights of the two losses to balance classification accuracy and model consistency.
In contrast to previous self-distillation methods that employed pseudo-twin networks with two identical models, SDM utilized the Teacher–Student structure for supervised self-distillation training. This approach encompassed two distinct components: supervised classification training and aligned distillation training. Both the Teacher and Student models employed an identical backbone based on the convolutional section of the ResNet50 architecture, pre-trained on ImageNet, with two fully connected layers appended to each model. During training, the Teacher received weakly enhanced images as input, whereas the Student received strongly enhanced images. The Teacher model was updated using a gradient and an exponential moving average (EMA) [16], which can be formally expressed as follows [17]:
$$W_{\mathrm{Teacher}}^{t} = (1-\beta)\,W_{\mathrm{Student}}^{t} + \beta\,W_{\mathrm{Teacher}}^{t-1}, \qquad \beta = 0.9$$
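As a concrete illustration, a minimal PyTorch sketch of this EMA update is given below; the function name ema_update and the parameter-wise loop are our own illustrative choices, not code from the paper.

import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, beta: float = 0.9) -> None:
    # W_Teacher^t = (1 - beta) * W_Student^t + beta * W_Teacher^(t-1)
    for w_t, w_s in zip(teacher.parameters(), student.parameters()):
        w_t.mul_(beta).add_(w_s, alpha=1.0 - beta)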
At the start of training, the output of the model is unreliable because the fully connected layers are randomly initialized. Therefore, during the first num training rounds, SDM solely employed the cross-entropy loss between the output of the Student model and the corresponding labels. As training progressed, SDM expanded the loss function to include both the cross-entropy loss and the cosine loss between the output features of the Student and Teacher models. The expanded loss function can be formulated as follows:
$$L = \begin{cases} \mathrm{CE}(\hat{y}_{\mathrm{Student}}, y), & t \le num \\ \mathrm{CE}(\hat{y}_{\mathrm{Student}}, y) + \alpha_t \left( 1 - \cos(\hat{x}_{\mathrm{Student}}, \hat{x}_{\mathrm{Teacher}}) \right), & t > num \end{cases}$$
where the weight of the alignment loss grows with the training round, which can be expressed as:
$$\alpha_t = \alpha_T \times \frac{t}{T}$$
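A minimal sketch of this staged loss in PyTorch follows; the helper name sdm_pretrain_loss and its argument names are illustrative assumptions rather than the authors' code.

import torch.nn.functional as F

def sdm_pretrain_loss(logits_s, y, feat_s, feat_t, t, num, alpha_T, T):
    # Warm-up: plain cross-entropy on the Student's predictions.
    ce = F.cross_entropy(logits_s, y)
    if t <= num:
        return ce
    # Afterwards: add the cosine alignment term with a linearly ramped weight.
    alpha_t = alpha_T * t / T
    cos = F.cosine_similarity(feat_s, feat_t.detach(), dim=1).mean()
    return ce + alpha_t * (1.0 - cos)

Detaching the Teacher features reflects the fact that the Teacher is updated only through the EMA rule above, not through backpropagation.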
To address the issue of random domain drift between the strongly enhanced images and the original domain images, SDM employed the cosine alignment loss to ensure that the model maps strongly and weakly enhanced images into similar spaces, allowing it to adapt to different domains and improving generalization performance. Additionally, updating the Teacher model through the exponential moving average provides stability and resistance to noise interference, leading to more stable parameter learning. Therefore, SDM opted to utilize the Teacher model obtained from source domain training to ensure both model stability and accuracy.
(2) Fine-tuning stage
To further enhance the labeled images in the target domain, we fine-tuned the SDM using the pre-trained Teacher model. This process involves an image hybridization operation, as depicted in Figure 3. To create hybrid images, SDM randomly selects two images from the target domain and combines them using parameters from the Beta distribution. These hybrids are then used alongside their corresponding labels to compute cross-entropy losses. These losses are then utilized to update the model. This hybrid approach is designed to enhance the model’s ability to capture specific feature representations of the target domain, thereby improving its performance within that domain.
For two randomly selected labeled images in the target domain, the images and labels are mixed in a certain proportion, which can be expressed as:
$$\tilde{x} = \lambda x_i + (1-\lambda) x_j, \qquad \tilde{y} = \lambda y_i + (1-\lambda) y_j$$
where the mixing parameter $\lambda$ is randomly drawn from a Beta distribution,
$$f(x; a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, x^{a-1} (1-x)^{b-1}$$
To ensure that the model can effectively identify "unseen" target domain data, SDM must balance its learning difficulty. However, applying MixUp to every batch makes the task too hard to learn. Therefore, SDM devised a random MixUp strategy, which can be summarized as follows:
$$L = \begin{cases} \mathrm{CE}(\mathrm{model}(x_i),\, y_i), & \text{with probability } 70\% \\ \lambda\, \mathrm{CE}(\mathrm{model}(\tilde{x}),\, y_i) + (1-\lambda)\, \mathrm{CE}(\mathrm{model}(\tilde{x}),\, y_j), & \text{with probability } 30\% \end{cases}$$
The MixUp pseudocode is as follows:
# y1, y2 should be one-hot vectors
import numpy as np

for (x1, y1), (x2, y2) in zip(loader1, loader2):
    lam = np.random.beta(alpha, alpha)    # mixing coefficient from Beta(alpha, alpha)
    x = lam * x1 + (1. - lam) * x2        # mix the two images
    y = lam * y1 + (1. - lam) * y2        # mix the one-hot labels
    optimizer.zero_grad()
    loss(net(x), y).backward()
    optimizer.step()
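The 70%/30% random MixUp strategy above can be realized as a thin wrapper around a single training step. The following sketch is our own illustration: the function name random_mixup_step, the use of hard class labels, and the probability argument p_mix are assumptions, not the authors' code.

import random
import torch
import torch.nn.functional as F

def random_mixup_step(model, x_a, y_a, x_b, y_b, alpha=0.5, p_mix=0.3):
    # With probability p_mix, train on a mixed image with the mixed
    # cross-entropy loss; otherwise train on image A alone.
    if random.random() < p_mix:
        lam = float(torch.distributions.Beta(alpha, alpha).sample())
        x_mix = lam * x_a + (1.0 - lam) * x_b
        logits = model(x_mix)
        return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
    return F.cross_entropy(model(x_a), y_a)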
Self-distillation, as an important learning technique, allows the model to learn from itself, improving prediction accuracy by enhancing the model’s generalization ability and reducing the occurrence of overfitting. This method has been effectively proven in various research studies [17,18,19]. On the other hand, mixed data augmentation effectively enriches the diversity of training data by combining non-traditional input and target example transformations. This increased diversity is crucial for building more resilient and generalized models. The integration of these two strategies into SDM methods is expected to form a more robust modeling solution and improve the model’s ability to defend against adversarial attacks.
In summary, the domain-adaptive model, which is based on self-distillation and mixing, undergoes both pre-training and fine-tuning. SDM incorporates enhanced data augmentation and image-mixing strategies to enhance its generalization and adaptability to the target domain.

3. Experiments

3.1. Experimental Setup

(1) Experimental environment setting
The software environment was Python 2.7, and the hardware environment consisted of two NVIDIA 2080 Ti GPUs (NVIDIA, Santa Clara, CA, USA). The distributed training functionality of the torch library was used for parallel training, so that more images could be processed in one iteration, which helps the model converge.
The operating system platform was Ubuntu 20.04 with support for the SFAN (Serverless Auto-Scale) function. SFAN was used for parallel training, solely to facilitate validation on larger datasets with more categories.
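The paper does not name the exact parallelization API, so the simple sketch below assumes torch's built-in data parallelism over the two GPUs; it splits each batch across device 0 and device 1 so that one iteration processes more images.

import torch.nn as nn
from torchvision import models

# Assumed setup: wrap the backbone so each batch is split across both GPUs.
model = nn.DataParallel(models.resnet50(pretrained=True), device_ids=[0, 1]).cuda()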
(2) Selection of the dataset
The Office-31 dataset [20] and the OfficeHome dataset [21] are currently the most widely used small-sample cross-domain test datasets. The Office-31 dataset (Figure 4) comprises 31 common object categories, distributed across three distinct data domains: amazon, DSLR, and webcam. The sample size varies among these domains, with each class ranging from a dozen to a hundred images. The Office-31 dataset, therefore, exemplifies the typical challenges posed by small sample sizes across domains. Notably, the amazon domain is the most extensive, containing 2817 images, while the webcam and DSLR domains are more compact, with 795 and 498 images, respectively. The OfficeHome dataset comprises images from four distinct fields: art images, clip art, product images, and real-world images. Within each domain, the dataset encompasses images of 65 object categories. The OfficeHome dataset is also representative of cross-domain small-sample datasets.
To address the issue of small sample sizes across domains, image classification is performed with one domain serving as the source domain and another as the target domain, yielding the tasks amazon→DSLR, amazon→webcam, webcam→DSLR, webcam→amazon, DSLR→amazon, and DSLR→webcam. Images from the amazon domain are semi-natural composite images of real objects against a pure white background. Webcam images are real objects captured in real-life settings using a web camera, while DSLR images are high-definition photographs of real objects taken in real-life settings using a digital SLR camera. The amazon domain exhibits distribution differences due to its non-real background, and there are also visual disparities in imaging quality between webcam and DSLR images.
To address the domain adaptation challenge posed by distribution differences, this paper aims to demonstrate that SDM outperforms other state-of-the-art models in terms of classification accuracy. SDM is tested on Office31 and several other datasets that follow the standard protocol for supervised domain adaptation.
For the Office31 dataset, we randomly selected 20 images from the amazon domain as the source domain, 8 images from the DSLR and webcam domains served as the training set, 3 images from the target domain were randomly chosen for fine-tuning with labels, and the remaining images were used for testing.
For the OfficeHome dataset, all source domain data were used for training. Three images were randomly selected from the target domain as labeled data, and the remaining images served as the test set. To ensure a fair comparison, the model framework was modified to AlexNet [22,23,24], as all other models used this framework on the OfficeHome dataset.
(3) Adjustment of the dataset
In the pre-training phase, it is essential to utilize both weak and strong enhancement techniques for model training. Here are the specifics:
(1) Weak Image Enhancement: The initial pre-processing stage involves randomly cropping the images into smaller patches and horizontally flipping them. The patches are then resized to match the input dimensions of ResNet50, typically 224 × 224 pixels. Subsequently, these patches are converted into Tensors and normalized for further processing.
(2) Strong Image Enhancement: SDM employs the MoCo v2 enhancement technique, which randomly crops the image to 224 × 224 pixels. The cropping ratios vary between 20% and 100% of the original image size, ensuring that the model is exposed to diverse image perspectives. Additionally, SDM randomly applies a color jitter transformation with parameters (0.4, 0.4, 0.4, 0.1), which govern the ranges of brightness, contrast, saturation, and hue changes, respectively; this transformation is applied with an 80% probability. Furthermore, SDM converts the image to grayscale with a 20% probability and applies Gaussian blur with a 50% probability. Finally, the image is resized to the standard ResNet50 input dimensions, transformed into a Tensor, and normalized.
The fine-tuning phase in the target domain employs the same weak enhancement technique as the pre-training phase. During the test phase, only image resizing and normalization are applied.
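For reference, the two augmentation pipelines described above can be written with torchvision transforms roughly as follows; the Gaussian-blur kernel size and the exact ordering of operations are assumptions, since the paper does not specify them.

from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# Weak enhancement: random crop + horizontal flip, resized to 224 x 224.
weak = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

# Strong enhancement in the MoCo v2 style: crop to 20-100% of the image,
# color jitter (0.4, 0.4, 0.4, 0.1) with probability 0.8, grayscale with
# probability 0.2, and Gaussian blur with probability 0.5.
strong = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.GaussianBlur(23, sigma=(0.1, 2.0))], p=0.5),
    transforms.ToTensor(),
    normalize,
])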
(4) The SDM model settings
(1) Modifying the classification head of ResNet50
Since the ResNet50 in PyTorch is trained on ImageNet, its classification head outputs a 1000-dimensional vector, which differs from the actual category dimensions of Office31 and OfficeHome. Therefore, the classification head needs to be modified. To ensure a fair comparison with other models, it was replaced with a fully connected layer with 1024 neurons.
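A minimal sketch of this modification follows; the 1024-unit hidden layer feeding a task-specific output layer is one plausible reading of the two appended fully connected layers mentioned in Section 2, not a confirmed detail of the authors' implementation.

import torch.nn as nn
from torchvision import models

num_classes = 31   # 31 for Office31, 65 for OfficeHome

model = models.resnet50(pretrained=True)      # ImageNet weights, 1000-d head
model.fc = nn.Sequential(                     # replace the ImageNet head
    nn.Linear(model.fc.in_features, 1024),    # fully connected layer, 1024 neurons
    nn.ReLU(inplace=True),
    nn.Linear(1024, num_classes),             # task-specific classifier
)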
(2) Source domain pre-training (Teacher–Student self-distillation)
Since SDM uses the exponential moving average to update the Teacher model, it needs to store the historical weights of the Teacher model and update them with the parameters of the Student model at the end of each training round. To enhance model training, SDM introduces a round threshold to regulate the composition of the loss function. Early in training, the random initialization of the model's classification head leads to poor classification results, and applying the cosine alignment loss at that point causes learning difficulties. However, as the model progresses, gradually increasing the weight assigned to the cosine alignment loss enhances the model's generalization capabilities. Each training round thus comprises the core loss calculation, the gradient update, and the exponential moving average update.
The cosine similarity loss function determines whether two input vectors are similar; it is commonly used for nonlinear word-vector learning and semi-supervised learning. For a batch of N samples D(a, b, y), a and b denote the two input vectors, and y denotes the ground-truth label in {1, −1}, representing similar and dissimilar pairs, respectively. The loss for the i-th sample is as follows:
$$l_i = \begin{cases} 1 - \cos(a_i, b_i), & \text{if } y_i = 1 \\ \max\left(0,\ \cos(a_i, b_i) - margin\right), & \text{if } y_i = -1 \end{cases}$$
When the label $y_i = -1$ and $\cos(a_i, b_i) < margin$, $l_i = 0$. In this case, the input samples are dissimilar and $\cos(a_i, b_i)$ is already small; such an easy-to-classify sample contributes nothing to the loss.
When the label $y_i = -1$ and $\cos(a_i, b_i) > margin$, $l_i = \cos(a_i, b_i) - margin$.
When the label $y_i = 1$, $l_i = 1 - \cos(a_i, b_i)$; in particular, when the angle between $a_i$ and $b_i$ is 0, $l_i = 0$.
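This case analysis matches the behavior of PyTorch's built-in nn.CosineEmbeddingLoss; a small usage sketch is shown below (the feature dimension 2048 and the all-ones target are illustrative: in SDM's distillation, Teacher and Student features always form similar pairs).

import torch
import torch.nn as nn

cos_loss = nn.CosineEmbeddingLoss(margin=0.0)

a = torch.randn(8, 2048)   # e.g., Student features
b = torch.randn(8, 2048)   # e.g., Teacher features
y = torch.ones(8)          # y = 1: every pair should be similar
loss = cos_loss(a, b, y)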
(3) Target domain fine-tuning (MixUp hybrid training)
In each batch, containing image A, image B, and their corresponding labels, a mixing coefficient is drawn from the Beta distribution with α = 0.5, β = α [25], and a uniformly distributed random number decides between the two branches. If the random number falls within one range (30% probability), the mixed image is used and the mixed classification loss is calculated; if it falls within the other range (70% probability), only image A is used for training.
(4) Target domain test
SDM classifies the remaining data in the target domain and reports the Top-1 classification accuracy.

3.2. Experimental Results and Analysis

In this paper, the average accuracy is used as the evaluation index to measure the classification results of the six cross-domain few-shot tasks. Ablation studies are a common experimental method in machine learning: by removing parts of the model, it is possible to examine their importance, verify the extent to which the relevant features affect the results, and evaluate the contribution of different components, parameters, or algorithms to the overall effect.
(1) Ablation experiments
To demonstrate the efficacy of the methods proposed in self-distillation and MixUp, the SDM model was initially validated through ablation experiments. These experiments involved removing the Self-Distillation module and MixUp module individually. The experimental outcomes are summarized in Table 1.
As shown in Table 1, when the self-distillation module is removed from SDM, the model’s average accuracy rate is 86.9%, which is 10.8% higher than that of the benchmark model. When the MixUp module is removed from SDM, the model’s average accuracy rate is 85.0%, which is 8.9% higher than that of the benchmark model. When SDM uses both self-distillation and MixUp, the model’s average accuracy rate is the highest, which is 2.1% and 4.0% higher than that of the models without self-distillation and MixUp, respectively.
(2) Comparison of the results
Using the Office31 and OfficeHome datasets, SDM conducted a series of experiments for the cross-domain small sample (supervised domain adaptation) task. The experimental results are summarized in Table 2 and Table 3.
Table 2 and Table 3 show that SDM has achieved the optimal results on the Office31 dataset, with an average accuracy that is 12.9% higher than the benchmark model (Resnet-50 pre-trained only in the source domain) and 1.0% higher than the CTL model proposed in 2023.
SDM achieved an average accuracy of 52.7% on the OfficeHome dataset, outperforming the best competing model on several tasks. The AlexNet backbone, owing to its simplicity and lack of the residual connections found in ResNet, experiences difficulties in gradient updates and is prone to instability, which ultimately limits its performance. We also tested the more advanced ResNet-50 backbone and observed a substantial 16.0% improvement in accuracy compared with AlexNet. This finding aligns with previous research on the accuracy gap between AlexNet and ResNet-50 on ImageNet [26,27,28]: AlexNet has no residual structure and is weaker than ResNet on more complex distributions. It is also worth noting that ViT, despite its stronger fitting ability and more complex architecture, performed poorly; its capacity is so large that it overfits on a few-shot dataset and therefore cannot solve the problem of scarce samples.
As shown in Table 2, SDM achieved the highest score in the “Avg” (average) category on the Office31 dataset, with significant advantages in the D→W and W→D tasks. Overall, the SDM model performed best among these six tasks. As shown in Table 3, SDM performs best on average across all tasks on the OfficeHome dataset, achieving the highest performance in domain adaptation tasks.
(3) Results analysis
To provide a more intuitive understanding of the effectiveness of the SDM method, this paper performs t-SNE analysis on three models: the benchmark model, the self-distillation model, and the fine-tuned model. The amazon subset of the Office31 dataset serves as the source domain, and the webcam subset serves as the target domain. Eight classes are randomly chosen, with 10 samples per class, and each class is assigned a unique color for visual identification. The feature vector of each image is computed using the trained backbone, and t-SNE dimensionality reduction projects the high-dimensional feature vectors onto a 2-dimensional plane for plotting.
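A minimal sketch of this visualization procedure is given below, assuming a feature-extracting backbone and scikit-learn's TSNE; the helper name tsne_plot is illustrative.

import torch
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

@torch.no_grad()
def tsne_plot(backbone, images, labels):
    backbone.eval()
    feats = backbone(images).flatten(1).cpu().numpy()   # (N, D) feature vectors
    emb = TSNE(n_components=2).fit_transform(feats)     # project onto a 2-D plane
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10")
    plt.show()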
After pre-training on the amazon domain, the t-SNE analysis of the pure ResNet model is shown in Figure 5, where different colors represent different categories.
After pre-training on the amazon domain, the t-SNE analysis of the Teacher–Student model is shown in Figure 6.
It is evident that the self-distillation approach, which is based on the Teacher–Student method, exhibits a smaller intra-class distance in the source domain compared to the simple ResNet after the pre-training phase. Furthermore, it demonstrates comparable generalizability in the target domain.
Furthermore, Figure 6 reveals that Teacher–Student pre-training not only diminishes the intra-class distance in the source domain but also enlarges the inter-class distance, so that the extracted features can be separated by a linear classifier. Moreover, despite the model having no exposure to target domain data, samples from the target domain still exhibit relatively small intra-class distances and a certain degree of clustering. This observation highlights the significance of the Teacher–Student architecture: increasing the separability of feature vectors in the feature space both enlarges the inter-class distance and diminishes the intra-class distance.
To reflect the effect of fine-tuning on domain adaptation, a visual analysis of the model after fine-tuning of the target domain using the MixUp method is shown in Figure 7.
After fine-tuning, it is evident from Figure 7 that all classes are clearly positioned in distinct regions. The model primarily expands the inter-class distance in the target domain, thereby enhancing separability. However, the SDM algorithm still has limitations: its performance depends on the quality of the input image, and heavy noise in the input degrades its results.

4. Conclusions

To address the challenge of few-shot sample sizes in practical training and deployment, this paper introduces a meta-learning-based approach to mitigate domain offset, proposing two techniques: meta-learning fine-tuning and pre-training meta-learning. Additionally, we introduce a self-distillation and mixing (SDM) method that borrows from the Teacher–Student framework. The approach transfers knowledge from the source domain to the target domain by leveraging self-distillation and mixed data augmentation. Experimental results demonstrate that the SDM method offers superior performance in terms of training time and accuracy, effectively transferring knowledge from the source domain to the target domain even with a limited number of target domain samples. The SDM model can be applied in sparse-sample environments, such as the detection of welding defects.

Author Contributions

Conceptualization, J.G., Y.D., J.Y. and S.L.; Data curation, J.G., Y.D., J.Y., W.X. and S.L.; Formal analysis, J.G., Y.D., J.Y., W.X. and S.L.; Funding acquisition, Y.D. and J.Y.; Investigation, J.G., Y.D., J.Y. and S.L.; Methodology, J.G., Y.D., W.X. and S.L.; Project administration, Y.D. and J.Y.; Resources, Y.D., J.Y. and S.L.; Software, J.G., Y.D., J.Y., W.X. and S.L.; Supervision, W.X. and S.L.; Validation, J.G., Y.D., J.Y., W.X. and S.L.; Visualization, Y.D., W.X. and S.L.; Writing—original draft, J.G., Y.D., J.Y., W.X. and S.L.; Writing—Review and editing, J.G., Y.D., W.X. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Hubei Province (No. 2023AFC010) and the Science Foundation of Wuhan Institute of Technology (K2023056).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Oza, P.; Sharma, P.; Patel, S. Deep ensemble transfer learning-based framework for mammographic image classification. J. Supercomput. 2023, 79, 8048–8069.
2. Sha, Y.; He, Z.; Gutierrez, H.; Du, J.; Yang, W.; Lu, X. Small sample classification based on data enhancement and its application in flip chip defection. Microelectron. Reliab. 2023, 141, 114887.
3. Gosho, M.; Ishii, R.; Noma, H.; Maruo, K. A comparison of bias-adjusted generalized estimating equations for sparse binary data in small-sample longitudinal studies. Stat. Med. 2023, 42, 2711–2727.
4. Kwon, Y.; Kwon, H.; Han, J.; Kang, M.; Kim, J.Y.; Shin, D.; Choi, Y.S.; Kang, S. Retention Time Prediction through Learning from a Small Training Data Set with a Pretrained Graph Neural Network. Anal. Chem. 2023, 95, 17273–17283.
5. Zhong, X.; Ban, H. Pre-trained network-based transfer learning: A small-sample machine learning approach to nuclear power plant classification problem. Ann. Nucl. Energy 2022, 175, 109201.
6. Wang, Z.; Liu, X.; Yu, J.; Wu, H.; Lyu, H. A general deep transfer learning framework for predicting the flow field of airfoils with small data. Comput. Fluids 2023, 251, 105738.
7. Lin, J.; Shao, H.; Min, Z.; Luo, J.; Xiao, Y.; Yan, S.; Zhou, J. Cross-domain fault diagnosis of bearing using improved semi-supervised meta-learning towards interference of out-of-distribution samples. Knowl.-Based Syst. 2022, 252, 109493.
8. Chen, Y.; Zheng, Y.; Xu, Z.; Tang, T.; Tang, Z.; Chen, J.; Liu, Y. Cross-domain few-shot classification based on lightweight Res2Net and flexible GNN. Knowl.-Based Syst. 2022, 247, 108623.
9. Guo, Y.; Codella, N.C.; Karlinsky, L.; Codella, J.V.; Smith, J.R.; Saenko, K.; Rosing, T.; Feris, R. A broader study of cross-domain few-shot learning. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 124–141.
10. Sukhija, S.; Krishnan, N.C.; Singh, G. Supervised Heterogeneous Domain Adaptation via Random Forests. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), New York, NY, USA, 9–15 July 2016; pp. 2039–2045.
11. Motiian, S.; Piccirilli, M.; Adjeroh, D.A.; Doretto, G. Unified deep supervised domain adaptation and generalization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5715–5725.
12. Motiian, S.; Jones, Q.; Iranmanesh, S.; Doretto, G. Few-shot adversarial domain adaptation. arXiv 2017, arXiv:1711.02536.
13. Xu, X.; Zhou, X.; Venkatesan, R.; Swaminathan, G.; Majumder, O. d-SNE: Domain adaptation using stochastic neighborhood embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2497–2506.
14. Huang, X.; Zhou, N.; Huang, J.; Zhang, H.; Pedrycz, W.; Choi, K.-S. Center transfer for supervised domain adaptation. Appl. Intell. 2023, 53, 18277–18293.
15. Huang, F.; Wang, Z.; Huang, X.; Qian, Y.; Li, Z.; Chen, H. Aligning Distillation for Cold-start Item Recommendation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23), New York, NY, USA, 23 July 2023; pp. 1147–1157.
16. Xu, M.; Zhang, Z.; Hu, H.; Wang, J.; Wang, L.; Wei, F.; Bai, X.; Liu, Z. End-to-end semi-supervised object detection with soft teacher. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 3060–3069.
17. Laine, S.; Aila, T. Temporal Ensembling for Semi-Supervised Learning. arXiv 2016, arXiv:1610.02242.
18. Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531.
19. Usman, M.; Rehman, A.; Masood, S.; Khan, T.M.; Qadir, J. Intelligent healthcare system for IoMT-integrated sonography: Leveraging multi-scale self-guided attention networks and dynamic self-distillation. Internet Things 2024, 25, 101065.
20. Saenko, K.; Kulis, B.; Fritz, M.; Darrell, T. Adapting visual category models to new domains. In Proceedings of the Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 213–226.
21. Venkateswara, H.; Eusebio, J.; Chakraborty, S.; Panchanathan, S. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5018–5027.
22. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
23. Tzeng, E.; Hoffman, J.; Darrell, T.; Saenko, K. Simultaneous deep transfer across domains and tasks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 4068–4076.
24. Hedegaard, L.; Sheikh-Omar, O.A.; Iosifidis, A. Supervised domain adaptation: A graph embedding perspective and a rectified experimental protocol. IEEE Trans. Image Process. 2021, 30, 8619–8631.
25. Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. mixup: Beyond Empirical Risk Minimization. arXiv 2017, arXiv:1710.09412.
26. Tong, X.; Xu, X.; Huang, S.-L.; Zheng, L. A mathematical framework for quantifying transferability in multi-source transfer learning. Adv. Neural Inf. Process. Syst. 2021, 34, 26103–26116.
27. Koniusz, P.; Tas, Y.; Porikli, F. Domain adaptation by mixture of alignments of second- or higher-order scatter tensors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4478–4487.
28. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
Figure 1. Schematic representation of the SDM.
Figure 2. SDM model diagram in the pre-training stage.
Figure 3. SDM model schematic in the fine-tuning stage.
Figure 4. Example of Office31 samples (left: DSLR; middle: amazon; right: webcam).
Figure 5. t-SNE analysis of the ResNet pre-training results.
Figure 6. t-SNE analysis after Teacher–Student pre-training.
Figure 7. t-SNE analysis after pre-training + fine-tuning.
Table 1. Average classification accuracy (%) on the 31 classes of the Office31 dataset.

Method                   A→D    A→W    D→A    D→W    W→A    W→D    Avg
ResNet-50                68.9   68.4   62.5   96.7   60.7   99.3   76.1
w/o Self-Distillation    93.0   90.3   73.0   97.1   70.2   98.0   86.9
w/o MixUp                87.7   86.2   68.0   98.3   69.8   100.0  85.0
SDM (Ours)               93.3   91.2   74.5   99.1   76.6   99.3   89.0

Note: ResNet-50 refers to pre-training only in the source domain. w/o Self-Distillation indicates that the source domain pre-training stage does not employ the self-distillation method, and w/o MixUp signifies that the target domain fine-tuning stage does not utilize MixUp. A: amazon, D: DSLR, W: webcam. The parts highlighted in bold are the maximum values.
Table 2. Average classification accuracy (%) of the different models on the 31 classes of the Office31 dataset.

Method                            A→D    A→W    D→A    D→W    W→A    W→D    Avg
ResNet-50                         68.9   68.4   62.5   96.7   60.7   99.3   76.1
SDADT [11,12,13,14,15,16,17,18]   86.1   82.7   66.2   95.7   65.0   97.6   82.2
CCSA [7,8,9,10,11]                89.0   88.2   71.8   96.4   72.1   97.6   85.8
FADA [8,9,10,11,12]               88.2   88.1   68.1   96.4   71.1   97.5   84.9
d-SNE [9,10,11,12,13]             91.4   90.1   71.7   97.5   71.1   97.1   86.5
DAG-LDA [19,20,21]                85.9   87.8   66.5   97.9   64.2   99.5   83.6
MF [20,21,22]                     90.0   87.3   72.1   97.2   72.4   96.5   85.9
So-HoT [21,22,23]                 86.3   84.5   66.5   95.5   65.7   97.5   82.7
CTL [10,11,12,13,14]              92.1   89.4   74.4   98.3   74.2   99.4   88.0
SDM (Ours)                        93.3   91.2   74.5   99.1   76.6   99.3   89.0

Note: ResNet-50 is only pre-trained in the source domain, and the best performance is highlighted in boldface.
Table 3. Average classification accuracy (%) of the different methods on the OfficeHome dataset.

DA Task   AlexNet  CCSA   d-SNE  DAG-LDA  CTL    SDM (Ours)  SDM (Ours) *
Ar→Ca     28.2     41.3   40.3   40.8     42.9   41.2        55.7
Ar→Pr     39.5     57.3   58.2   55.3     61.7   57.1        76.5
Ar→Rw     51.4     59.2   57.1   57.4     62.5   60.5        78.3
Ca→Ar     32.0     42.5   41.5   41.3     43.8   41.9        61.1
Ca→Pr     44.9     59.1   56.3   57.2     57.6   60.1        72.2
Ca→Rw     47.1     59.1   58.2   58.4     59.0   58.5        71.1
Pr→Ar     30.2     40.0   40.2   42.2     41.5   41.2        63.3
Pr→Ca     28.7     44.1   43.9   44.0     45.1   45.6        54.8
Pr→Rw     53.6     58.9   58.4   59.2     60.4   59.8        77.7
Rw→Ar     43.4     46.4   46.2   48.2     50.6   51.7        70.9
Rw→Ca     33.9     45.2   46.6   46.7     48.0   46.6        60.3
Rw→Pr     60.6     66.9   68.2   67.4     71.3   68.1        82.0
Avg       41.1     51.7   51.3   51.5     53.7   52.7        68.7

Note: * based on ResNet-50; the other methods are based on AlexNet. Best performance highlighted in boldface.