# Securing Network Traffic Classification Models against Adversarial Examples Using Derived Variables


## Abstract


## 1. Introduction

**Figure 1.** Adversarial attack in computer vision [21].

- A method that improves the robustness of ML-based network traffic classification models against adversarial attacks is proposed by integrating DVars into training. DVars follow the logic of adding randomness to the input data; in particular, our approach preserves the underlying logic and maliciousness of the network flow.
- Evaluation of the proposed method on the CSE-CIC-IDS2018 dataset, with a specific focus on improving the accuracy of network traffic classification models when subjected to AEs. Our experiments and analysis show that the approach considerably improves model performance.
- Investigation of the impact of AEs on ML-based network traffic classification models. Through experiments and analysis, we explore the effect of AEs on the performance and robustness of the investigated models.
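As a concrete illustration of the first contribution, the following minimal sketch appends DVars computed from adjacent flow records as extra feature columns before training. This is a hypothetical rendering, not the paper's exact construction (the function name and the pairing of each record with its successor are our assumptions); the precise DVar algorithms appear in Section 4.

```python
import numpy as np

def augment_with_dvars(X):
    """Append derived variables (DVars) to a flow-feature matrix.

    Hypothetical sketch: for every feature column, the sum and the
    absolute difference of each record and the record that follows it
    are appended as new columns (the last record is paired with itself).
    The original flow values, and hence the flow's logic, are untouched.
    """
    nxt = np.vstack([X[1:], X[-1:]])              # successor of each row
    return np.hstack([X, X + nxt, np.abs(X - nxt)])

# Three flows, two features each -> three flows, six features each.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
X_aug = augment_with_dvars(X)
```

Because the augmented matrix still contains the original columns unchanged, a model trained on it can only gain information, while the derived columns inject the randomness the defense relies on.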

## 2. Related Work

#### 2.1. ML-Based Network Traffic Classification

#### 2.2. Common AE Generation Methods

#### 2.2.1. Fast Gradient Sign Method (FGSM) Attack

#### 2.2.2. Projected Gradient Descent (PGD) Attack

#### 2.2.3. Jacobian-Based Saliency Map Attack (JSMA)

#### 2.3. Common Defense Methods against AEs

## 3. AE Impact on Network Traffic Classifiers

#### 3.1. The Target Models

#### 3.2. Generating AEs for Non-Gradient-Based Models

#### 3.3. Domain Constraints in AE Generation

#### 3.4. Using AdverNet to Attack Target Models

## 4. DVars for Adversarial Attack Defense

#### 4.1. Feature Selection

#### 4.2. DVar Algorithms

#### 4.2.1. The Sum of Adjacent Values

Algorithm 1: Derivation of representative features using the sum of adjacent values.

Data: Baseline dataset X

Result: Matrix of representative features Y
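The body of Algorithm 1 did not survive extraction. As a hedged sketch consistent with the Data/Result description above, one plausible reading is that each entry of Y is the sum of a baseline value and its immediate successor in the same column (the last row, having no successor, is carried over unchanged):

```python
import numpy as np

def sum_adjacent(X):
    """Sketch of Algorithm 1 (assumed reading): derive Y from the
    baseline dataset X by summing adjacent values column-wise; the
    final row, which has no successor, is copied through unchanged."""
    X = np.asarray(X, dtype=float)
    Y = X.copy()
    Y[:-1] = X[:-1] + X[1:]   # y_i = x_i + x_{i+1}
    return Y

Y = sum_adjacent([[1, 10], [2, 20], [4, 40]])
```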

#### 4.2.2. The Absolute Difference in Adjacent Values

Algorithm 2: Derivation of representative features using the absolute difference in adjacent values.

Data: Baseline dataset X

Result: Matrix of representative features Y
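As with Algorithm 1, the pseudocode body is missing; under the same assumed reading, Algorithm 2 replaces each value with the absolute difference between it and its immediate successor in the same column:

```python
import numpy as np

def abs_diff_adjacent(X):
    """Sketch of Algorithm 2 (assumed reading): derive Y from the
    baseline dataset X as the absolute difference of adjacent values
    column-wise; the final row is copied through unchanged."""
    X = np.asarray(X, dtype=float)
    Y = X.copy()
    Y[:-1] = np.abs(X[:-1] - X[1:])   # y_i = |x_i - x_{i+1}|
    return Y

Y = abs_diff_adjacent([[1, 10], [4, 20], [2, 40]])
```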

## 5. Dataset and Performance Metrics

#### 5.1. Dataset

#### 5.2. Performance Metrics

- Accuracy: This is a common metric used to evaluate the performance of ML models. It measures the proportion of correctly classified samples relative to the total number of samples.$$\mathrm{Accuracy}=\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{FP}+\mathrm{TN}+\mathrm{FN}}$$
- Precision: This metric measures the proportion of true-positive predictions to the total number of positive predictions.$$\mathrm{Precision}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}$$
- Recall: This metric measures the proportion of true-positive predictions to the total number of positive samples in the dataset.$$\mathrm{Recall}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}$$
- F1-score: This metric is the harmonic mean of precision and recall, providing a single score that balances both metrics. It ranges from 0 to 1, with higher values indicating better performance.$$\mathrm{F1\text{-}score}=2\times\frac{\mathrm{Precision}\times\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}$$
- Adversarial accuracy: This metric measures the percentage of correct predictions made by the model on the AEs, providing a measure of the model's ability to correctly classify AEs.$$\mathrm{Adv\;Acc}=\frac{\mathrm{TP}_{\mathrm{adv}}+\mathrm{TN}_{\mathrm{adv}}}{\mathrm{TP}_{\mathrm{adv}}+\mathrm{FP}_{\mathrm{adv}}+\mathrm{TN}_{\mathrm{adv}}+\mathrm{FN}_{\mathrm{adv}}}$$
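The formulas above reduce to a few lines of code; adversarial accuracy is simply the accuracy formula evaluated on confusion-matrix counts obtained from AEs. The helper below is for illustration only (its name and signature are ours, not the paper's):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute accuracy, precision, recall and F1-score from the four
    confusion-matrix counts, exactly as defined above.  Passing counts
    measured on adversarial examples yields adversarial accuracy."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=80, fp=20, tn=80, fn=20)
```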

## 6. Results and Analysis

#### 6.1. Experimental Setup

#### 6.2. Comparative Analysis

#### 6.3. Scalability

## 7. Conclusions and Future Research

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

1. Jmila, H.; Khedher, M.I. Adversarial machine learning for network intrusion detection: A comparative study. Comput. Networks **2022**, 214, 109073.
2. Fu, C.; Li, Q.; Shen, M.; Xu, K. Frequency domain feature based robust malicious traffic detection. IEEE/ACM Trans. Netw. **2022**, 31, 452–467.
3. Wang, C.; Chen, J.; Yang, Y.; Ma, X.; Liu, J. Poisoning attacks and countermeasures in intelligent networks: Status quo and prospects. Digit. Commun. Networks **2022**, 8, 225–234.
4. Pawlicki, M.; Choraś, M.; Kozik, R. Defending network intrusion detection systems against adversarial evasion attacks. Future Gener. Comput. Syst. **2020**, 110, 148–154.
5. Chan, P.P.; Zheng, J.; Liu, H.; Tsang, E.C.; Yeung, D.S. Robustness analysis of classical and fuzzy decision trees under adversarial evasion attack. Appl. Soft Comput. **2021**, 107, 107311.
6. Apruzzese, G.; Colajanni, M.; Marchetti, M. Evaluating the effectiveness of adversarial attacks against botnet detectors. In Proceedings of the 2019 IEEE 18th International Symposium on Network Computing and Applications (NCA), Cambridge, MA, USA, 26–28 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–8.
7. Biggio, B.; Corona, I.; Maiorca, D.; Nelson, B.; Šrndić, N.; Laskov, P.; Giacinto, G.; Roli, F. Evasion attacks against machine learning at test time. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Prague, Czech Republic, 22–26 September 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 387–402.
8. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv **2013**, arXiv:1312.6199.
9. Ibitoye, O.; Shafiq, O.; Matrawy, A. Analyzing adversarial attacks against deep learning for intrusion detection in IoT networks. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Big Island, HI, USA, 9–13 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6.
10. Martins, N.; Cruz, J.M.; Cruz, T.; Abreu, P.H. Adversarial machine learning applied to intrusion and malware scenarios: A systematic review. IEEE Access **2020**, 8, 35403–35419.
11. Apruzzese, G.; Andreolini, M.; Colajanni, M.; Marchetti, M. Hardening random forest cyber detectors against adversarial attacks. IEEE Trans. Emerg. Top. Comput. Intell. **2020**, 4, 427–439.
12. Apruzzese, G.; Andreolini, M.; Ferretti, L.; Marchetti, M.; Colajanni, M. Modeling realistic adversarial attacks against network intrusion detection systems. Digit. Threat. Res. Pract. (DTRAP) **2022**, 3, 1–19.
13. Aiken, J.; Scott-Hayward, S. Investigating adversarial attacks against network intrusion detection systems in SDNs. In Proceedings of the 2019 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Dallas, TX, USA, 12–14 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–7.
14. Han, D.; Wang, Z.; Zhong, Y.; Chen, W.; Yang, J.; Lu, S.; Shi, X.; Yin, X. Evaluating and improving adversarial robustness of machine learning-based network intrusion detectors. IEEE J. Sel. Areas Commun. **2021**, 39, 2632–2647.
15. Wang, J.; Qixu, L.; Di, W.; Dong, Y.; Cui, X. Crafting adversarial example to bypass flow-&ML-based botnet detector via RL. In Proceedings of the 24th International Symposium on Research in Attacks, Intrusions and Defenses, San Sebastian, Spain, 6–8 October 2021; pp. 193–204.
16. Zhang, H.; Wang, J. Defense against adversarial attacks using feature scattering-based adversarial training. Adv. Neural Inf. Process. Syst. **2019**, 32, 1831–1841.
17. Carlini, N.; Athalye, A.; Papernot, N.; Brendel, W.; Rauber, J.; Tsipras, D.; Goodfellow, I.; Madry, A.; Kurakin, A. On evaluating adversarial robustness. arXiv **2019**, arXiv:1902.06705.
18. Wong, E.; Rice, L.; Kolter, J.Z. Fast is better than free: Revisiting adversarial training. arXiv **2020**, arXiv:2001.03994.
19. Feinman, R.; Curtin, R.R.; Shintre, S.; Gardner, A.B. Detecting adversarial samples from artifacts. arXiv **2017**, arXiv:1703.00410.
20. Wang, J.; Pan, J.; AlQerm, I.; Liu, Y. Def-IDS: An ensemble defense mechanism against adversarial attacks for deep learning-based network intrusion detection. In Proceedings of the 2021 International Conference on Computer Communications and Networks (ICCCN), Athens, Greece, 19–22 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–9.
21. Edwards, D.; Rawat, D.B. Study of adversarial machine learning with infrared examples for surveillance applications. Electronics **2020**, 9, 1284.
22. Vitorino, J.; Praça, I.; Maia, E. SoK: Realistic adversarial attacks and defenses for intelligent network intrusion detection. Comput. Secur. **2023**, 134, 103433.
23. Mohanty, H.; Roudsari, A.H.; Lashkari, A.H. Robust stacking ensemble model for darknet traffic classification under adversarial settings. Comput. Secur. **2022**, 120, 102830.
24. Zaki, F.A.M.; Chin, T.S. FWFS: Selecting robust features towards reliable and stable traffic classifier in SDN. IEEE Access **2019**, 7, 166011–166020.
25. Cao, J.; Wang, D.; Qu, Z.; Sun, H.; Li, B.; Chen, C.L. An improved network traffic classification model based on a support vector machine. Symmetry **2020**, 12, 301.
26. Bhatia, M.; Sharma, V.; Singh, P.; Masud, M. Multi-level P2P traffic classification using heuristic and statistical-based techniques: A hybrid approach. Symmetry **2020**, 12, 2117.
27. Dey, S.; Ye, Q.; Sampalli, S. A machine learning based intrusion detection scheme for data fusion in mobile clouds involving heterogeneous client networks. Inf. Fusion **2019**, 49, 205–215.
28. Rust-Nguyen, N.; Sharma, S.; Stamp, M. Darknet traffic classification and adversarial attacks using machine learning. Comput. Secur. **2023**, 103098.
29. Lin, Z.; Shi, Y.; Xue, Z. IDSGAN: Generative adversarial networks for attack generation against intrusion detection. In Proceedings of the Advances in Knowledge Discovery and Data Mining: 26th Pacific-Asia Conference, PAKDD 2022, Chengdu, China, 16–19 May 2022; Proceedings, Part III; Springer: Berlin/Heidelberg, Germany, 2022; pp. 79–91.
30. Alhajjar, E.; Maxwell, P.; Bastian, N. Adversarial machine learning in network intrusion detection systems. Expert Syst. Appl. **2021**, 186, 115782.
31. Asadi, M.; Jamali, M.A.J.; Parsa, S.; Majidnezhad, V. Detecting botnet by using particle swarm optimization algorithm based on voting system. Future Gener. Comput. Syst. **2020**, 107, 95–111.
32. Capuano, N.; Fenza, G.; Loia, V.; Stanzione, C. Explainable artificial intelligence in cybersecurity: A survey. IEEE Access **2022**, 10, 93575–93600.
33. McCarthy, A.; Ghadafi, E.; Andriotis, P.; Legg, P. Defending against adversarial machine learning attacks using hierarchical learning: A case study on network traffic attack classification. J. Inf. Secur. Appl. **2023**, 72, 103398.
34. Qian, Y.; Lu, H.; Ji, S.; Zhou, W.; Wu, S.; Yun, B.; Tao, X.; Lei, J. Adversarial example generation based on particle swarm optimization. J. Electron. Inf. Technol. **2019**, 41, 1658–1665.
35. Usama, M.; Qayyum, A.; Qadir, J.; Al-Fuqaha, A. Black-box adversarial machine learning attack on network traffic classification. In Proceedings of the 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC), Tangier, Morocco, 24–28 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 84–89.
36. Xu, H.; Ma, Y.; Liu, H.C.; Deb, D.; Liu, H.; Tang, J.L.; Jain, A.K. Adversarial attacks and defenses in images, graphs and text: A review. Int. J. Autom. Comput. **2020**, 17, 151–178.
37. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv **2014**, arXiv:1412.6572.
38. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv **2017**, arXiv:1706.06083.
39. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The limitations of deep learning in adversarial settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Delft, The Netherlands, 3–7 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 372–387.
40. Yuan, X.; He, P.; Zhu, Q.; Li, X. Adversarial examples: Attacks and defenses for deep learning. IEEE Trans. Neural Networks Learn. Syst. **2019**, 30, 2805–2824.
41. Chakraborty, A.; Alam, M.; Dey, V.; Chattopadhyay, A.; Mukhopadhyay, D. A survey on adversarial attacks and defences. CAAI Trans. Intell. Technol. **2021**, 6, 25–45.
42. Qiu, S.; Liu, Q.; Zhou, S.; Wu, C. Review of artificial intelligence adversarial attack and defense technologies. Appl. Sci. **2019**, 9, 909.
43. Zhang, L.; Qi, G.J. WCP: Worst-case perturbations for semi-supervised deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3912–3921.
44. Bai, T.; Luo, J.; Zhao, J.; Wen, B.; Wang, Q. Recent advances in adversarial training for adversarial robustness. arXiv **2021**, arXiv:2102.01356.
45. Zhang, J.; Li, C. Adversarial examples: Opportunities and challenges. IEEE Trans. Neural Networks Learn. Syst. **2019**, 31, 2578–2593.
46. Anthi, E.; Williams, L.; Javed, A.; Burnap, P. Hardening machine learning denial of service (DoS) defences against adversarial attacks in IoT smart home networks. Comput. Secur. **2021**, 108, 102352.
47. Abou Khamis, R.; Matrawy, A. Evaluation of adversarial training on different types of neural networks in deep learning-based IDSs. In Proceedings of the 2020 International Symposium on Networks, Computers and Communications (ISNCC), Montreal, QC, Canada, 20–22 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6.
48. Chollet, F. Keras. 2015. Available online: https://github.com/fchollet/keras (accessed on 15 September 2023).
49. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. **2011**, 12, 2825–2830.
50. Nicolae, M.I.; Sinn, M.; Tran, M.N.; Buesser, B.; Rawat, A.; Wistuba, M.; Zantedeschi, V.; Baracaldo, N.; Chen, B.; Ludwig, H.; et al. Adversarial Robustness Toolbox v1.0.0. arXiv **2018**, arXiv:1807.01069.
51. Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z.B.; Swami, A. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, Abu Dhabi, United Arab Emirates, 2–6 April 2017; pp. 506–519.
52. Debicha, I.; Cochez, B.; Kenaza, T.; Debatty, T.; Dricot, J.M.; Mees, W. Adv-Bot: Realistic adversarial botnet attacks against network intrusion detection systems. Comput. Secur. **2023**, 129, 103176.
53. Merzouk, M.A.; Cuppens, F.; Boulahia-Cuppens, N.; Yaich, R. Investigating the practicality of adversarial evasion attacks on network intrusion detection. Ann. Telecommun. **2022**, 77, 763–775.
54. Teuffenbach, M.; Piatkowska, E.; Smith, P. Subverting network intrusion detection: Crafting adversarial examples accounting for domain-specific constraints. In Proceedings of the Machine Learning and Knowledge Extraction: 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Dublin, Ireland, 25–28 August 2020; Proceedings 4; Springer: Berlin/Heidelberg, Germany, 2020; pp. 301–320.
55. Zhou, Y.; Cheng, G.; Jiang, S.; Dai, M. Building an efficient intrusion detection system based on feature selection and ensemble classifier. Comput. Networks **2020**, 174, 107247.
56. Jiang, H.; Lin, J.; Kang, H. FGMD: A robust detector against adversarial attacks in the IoT network. Future Gener. Comput. Syst. **2022**, 132, 194–210.
57. Canadian Institute for Cybersecurity. CSE-CIC-IDS2018 on AWS. 2018. Available online: https://www.unb.ca/cic/datasets/ids-2018.html (accessed on 15 September 2023).
58. Pujari, M.; Pacheco, Y.; Cherukuri, B.; Sun, W. A comparative study on the impact of adversarial machine learning attacks on contemporary intrusion detection datasets. SN Comput. Sci. **2022**, 3, 412.
59. Pujari, M.; Cherukuri, B.P.; Javaid, A.Y.; Sun, W. An approach to improve the robustness of machine learning based intrusion detection system models against the Carlini-Wagner attack. In Proceedings of the 2022 IEEE International Conference on Cyber Security and Resilience (CSR), Virtual Conference, 27–29 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 62–67.
60. Shu, R.; Xia, T.; Williams, L.; Menzies, T. Omni: Automated ensemble with unexpected models against adversarial evasion attack. Empir. Softw. Eng. **2022**, 27, 1–32.

**Figure 2.** Adversarial attack on network traffic classification systems [22].

**Figure 6.** The baseline results showing accuracy, precision, recall, and F1-score for each class of the dataset.

**Figure 7.** The results with the AE effect, showing accuracy with respect to the perturbation intensity.

**Figure 8.** The confusion matrix showing results with AEs present at low perturbation intensity ($\epsilon = 0.01$).

**Figure 9.** The results with the defense effect, showing accuracy, precision, recall, and F1-score with respect to the perturbation intensity.

**Figure 10.** The confusion matrix showing the robustness of DVars against AEs at low perturbation intensity ($\epsilon = 0.01$).

**Table 1.** A representation of DVars using both the sum and the absolute difference in adjacent flow features; C indicates the condition.

| Sbytes | Dbytes | Spkts | Dpkts | C |
|---|---|---|---|---|
| ${\mathrm{sbytes}}_{1}$ | ${\mathrm{dbytes}}_{1}$ | ${\mathrm{spkts}}_{1}$ | ${\mathrm{dpkts}}_{1}$ | — |
| ⋯ | ⋯ | ⋯ | ⋯ | — |
| ${\mathrm{sbytes}}_{n+1}$ | ${\mathrm{dbytes}}_{n+1}$ | ${\mathrm{spkts}}_{n+1}$ | ${\mathrm{dpkts}}_{n+1}$ | Equation (7) |
| $\lvert{\mathrm{sbytes}}_{n-1}\rvert$ | $\lvert{\mathrm{dbytes}}_{n-1}\rvert$ | $\lvert{\mathrm{spkts}}_{n-1}\rvert$ | $\lvert{\mathrm{dpkts}}_{n-1}\rvert$ | Equation (8) |
| ${\mathrm{sbytes}}_{n}$ | ${\mathrm{dbytes}}_{n}$ | ${\mathrm{spkts}}_{n}$ | ${\mathrm{dpkts}}_{n}$ | — |

| | Benign | GoldenEye | Slowloris |
|---|---|---|---|
| **Before preprocessing (raw dataset)** | | | |
| Class counts | 996,077 | 41,508 | 10,990 |
| Proportion of total | 94.99% | 3.96% | 1.05% |
| **After preprocessing (re-sampled dataset)** | | | |
| Class counts | 21,200 | 18,400 | 10,800 |
| Proportion of total | 42.06% | 36.51% | 21.43% |

| Model | Parameters |
|---|---|
| AdverNet | Layer 1 = 128, Layer 2 = 64, Layer 3 = 3, activation = ReLU, optimizer = Adam, output-layer activation = softmax, epochs = 50 |
| DT | Criterion = gini, max_depth = 12 |
| RF | Number of estimators = 170, random_state = 4, min_samples_split = 5 |
| KNN | Number of neighbors = 10, distance metric = Euclidean |
| JSMA | $\theta = 1.0$, $\gamma = 0.1$, clip min = 0.0, clip max = 1.0 |

| Model | Acc | Prec | Rec | F1 |
|---|---|---|---|---|
| AdverNet | 0.98 | 0.98 | 0.98 | 0.98 |
| DT | 1.00 | 1.00 | 1.00 | 1.00 |
| RF | 0.99 | 0.99 | 0.99 | 0.99 |
| KNN | 0.99 | 0.99 | 0.99 | 0.99 |

**Table 5.** Comparison of baseline performance of the leading models using the CSE-CIC-IDS2018 dataset.

| Work | Model | Acc | Prec | Rec | F1 |
|---|---|---|---|---|---|
| Apruzzese et al. [6] | KNN | - | 0.99 | 0.99 | 0.99 |
| Pujari et al. [58] | RF | 0.92 | - | 0.91 | 0.94 |
| Pujari et al. [59] | RF | 0.91 | - | 0.91 | 0.94 |
| Shu et al. [60] | N/A | 0.94 | - | - | - |
| Ours | KNN | 0.99 | 0.99 | 0.99 | 0.99 |

| Work | Attack | Acc | Rec | F1 |
|---|---|---|---|---|
| Apruzzese et al. [6] | Self | - | 0.48 | - |
| Pujari et al. [58] | JSMA | 0.84 | 0.57 | 0.59 |
| Pujari et al. [59] | C&W | 0.81 | 0.81 | 0.83 |
| Shu et al. [60] | JSMA | 0.94 | - | - |
| Ours | JSMA | 0.45 | 0.48 | 0.44 |

**Table 7.** Comparison of defense effectiveness of the leading models using the CSE-CIC-IDS2018 dataset.

| Work | Attack | Acc | Rec | F1 |
|---|---|---|---|---|
| Apruzzese et al. [6] | RAF | - | 0.90 | 0.82 |
| Pujari et al. [59] | GAN | 0.82 | 0.83 | 0.84 |
| Shu et al. [60] | A2 | 0.64 | - | - |
| | A3 | 0.51 | - | - |
| | A4 | 0.63 | - | - |
| | A5 | 0.78 | - | - |
| Ours | DVars | 0.84 | 0.84 | 0.83 |

| Intensity ($\epsilon$) | Model | Acc | Prec | Rec | F1 |
|---|---|---|---|---|---|
| Baseline | DT | 1.00 | 1.00 | 1.00 | 1.00 |
| | RF | 0.99 | 0.99 | 0.99 | 0.99 |
| | KNN | 0.99 | 0.99 | 0.99 | 0.99 |
| Low | DT | 0.72 | 0.81 | 0.72 | 0.65 |
| | RF | 0.72 | 0.81 | 0.72 | 0.65 |
| | KNN | 0.84 | 0.87 | 0.84 | 0.83 |
| Medium | DT | 0.67 | 0.50 | 0.67 | 0.56 |
| | RF | 0.67 | 0.50 | 0.67 | 0.56 |
| | KNN | 0.72 | 0.79 | 0.72 | 0.64 |
| High | DT | 0.67 | 0.50 | 0.67 | 0.56 |
| | RF | 0.67 | 0.50 | 0.67 | 0.56 |
| | KNN | 0.66 | 0.67 | 0.66 | 0.55 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Adeke, J.M.; Liu, G.; Zhao, J.; Wu, N.; Bashir, H.M.
Securing Network Traffic Classification Models against Adversarial Examples Using Derived Variables. *Future Internet* **2023**, *15*, 405.
https://doi.org/10.3390/fi15120405
