Abstract
The conventional way to evaluate the performance of machine learning models for intrusion detection systems (IDS) is to train and test on the same dataset. This method may introduce bias from the computer network on which the traffic was generated, so the applicability of the learned models may not be adequately evaluated. We argued in Al-Riyami et al. [1] that a better approach is cross-dataset evaluation, in which two different datasets, generated on different networks, are used for training and testing. As shown in [1], this method may lead to a significant drop in the performance of the learned model, indicating that the models learn very little knowledge about intrusions that is transferable from one setting to another. The reasons for this behaviour were not fully understood in [1]. In this paper, we investigate the problem and show that the main cause is that the same feature is defined differently in the two datasets. We propose a correction and empirically investigate cross-dataset evaluation for various machine learning methods. We further explore cross-dataset evaluation in the multiclass classification of attacks and show that, for most models, learning traffic normality is more robust than learning intrusions.
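The evaluation protocol contrasted above can be illustrated with a minimal sketch. The synthetic `make_traffic` generator, the feature `shift`, and the use of a random forest are all illustrative assumptions standing in for two real IDS datasets captured on different networks; they are not the paper's actual data or experimental setup.

```python
# Sketch: within-dataset vs. cross-dataset evaluation of an IDS classifier.
# Hypothetical synthetic data stands in for two datasets from different networks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_traffic(n, shift):
    """Synthetic flow features; `shift` mimics a network-specific feature
    distribution (e.g. the same feature defined differently per dataset)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # 1 = attack, 0 = normal
    return X, y

X_a, y_a = make_traffic(2000, shift=0.0)   # "dataset A" network
X_b, y_b = make_traffic(2000, shift=1.5)   # "dataset B" network

# Conventional within-dataset evaluation: train/test split of dataset A.
X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
within = accuracy_score(y_te, clf.predict(X_te))

# Cross-dataset evaluation: the same model, tested on dataset B.
cross = accuracy_score(y_b, clf.predict(X_b))

print(f"within-dataset accuracy: {within:.2f}")
print(f"cross-dataset accuracy:  {cross:.2f}")
```

Because the two synthetic "networks" shift the feature distribution, the model scores well within dataset A but degrades on dataset B, mirroring the performance drop the paper attributes to inconsistent feature definitions across datasets.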
References
Al-Riyami S, Coenen F, Lisitsa A (2018) A re-evaluation of intrusion detection accuracy: alternative evaluation strategy. In: Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. ACM, pp 2195–2197
Tavallaee M, Bagheri E, Lu W, Ghorbani AA (2009) A detailed analysis of the KDD cup 99 data set. In: IEEE symposium on computational intelligence for security and defense applications, CISDA 2009. IEEE, pp 1–6
Perona I, Gurrutxaga I, Arbelaitz O, Martín JI, Muguerza J, Pérez JM (2008) Service-independent payload analysis to improve intrusion detection in network traffic. In: Proceedings of the 7th Australasian data mining conference, vol 87. Australian Computer Society, Inc., pp 171–178
Ho TK (1995) Random decision forests. In: Proceedings of 3rd international conference on document analysis and recognition, vol 1. IEEE, pp 278–282
Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
Freund Y, Schapire RE (1995) A decision-theoretic generalization of on-line learning and an application to boosting. In: European conference on computational learning theory. Springer, pp 23–37
Friedman JH (2001) Greedy function approximation: a gradient boosting machine. Ann Stat 29(5):1189–1232
Vapnik VN (1995) The nature of statistical learning theory. Springer, New York
Cho K, Van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078
Hjort N (1996) Pattern recognition and neural networks. Cambridge University Press, Cambridge
Bishop CM et al (1995) Neural networks for pattern recognition. Oxford University Press, New York
Ng A (2016) Machine learning yearning: technical strategy for AI engineers in the era of deep learning. https://www.mlyearning.org
GMLC Course (2020) Classification: precision and recall. https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830
Wikipedia (2018) Accuracy paradox. https://en.wikipedia.org/wiki/Accuracy_paradox. Accessed 17 Apr 2018
McHugh J (2000) Testing intrusion detection systems: a critique of the 1998 and 1999 DARPA intrusion detection system evaluations as performed by Lincoln laboratory. ACM Trans Inform Syst Secur (TISSEC) 3(4):262–294
Mahoney MV, Chan PK (2003) An analysis of the DARPA/Lincoln laboratory evaluation data for network anomaly detection. In: International workshop on recent advances in intrusion detection. Springer, pp 220–237
Cotton M, Eggert L, Touch J, Westerlund M, Cheshire S (2011) Internet assigned numbers authority (IANA) procedures for the management of the service name and transport protocol port number registry. RFC 6335:1–33
Stolfo S, Fan W, Lee W et al (1999) KDD-CUP-99 task description
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Al-Riyami, S., Lisitsa, A., Coenen, F. (2022). Cross-Datasets Evaluation of Machine Learning Models for Intrusion Detection Systems. In: Yang, XS., Sherratt, S., Dey, N., Joshi, A. (eds) Proceedings of Sixth International Congress on Information and Communication Technology. Lecture Notes in Networks and Systems, vol 217. Springer, Singapore. https://doi.org/10.1007/978-981-16-2102-4_73
DOI: https://doi.org/10.1007/978-981-16-2102-4_73
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-2101-7
Online ISBN: 978-981-16-2102-4
eBook Packages: Intelligent Technologies and Robotics