DOI: 10.5555/3020299.3020306
Article

Using Bayesian attack detection models to drive cyber deception

Published: 27 July 2014

Abstract

We present a method to devise, execute, and assess a cyber deception. The aim is to cause an adversary to believe they are under a cyber attack when in fact they are not. Cyber network defense relies on human and computational systems that can reason over multiple individual evidentiary items to detect the presence of meta-events, i.e., cyber attacks. Many of these systems aggregate and reason over alerts from Network-based Intrusion Detection Systems (NIDS), which use byte patterns as attack signatures to analyze network traffic and generate corresponding alerts. Current aggregation and reasoning tools use a variety of techniques to model meta-events, among them Bayesian Networks. However, the inputs to these models are derived from network traffic, which is inherently subject to manipulation. In this work, we demonstrate a capability to remotely and artificially trigger specific meta-events in a potentially unknown model. We use an existing, known Bayesian Network-based cyber attack detection system to guide the construction of deceptive network packets. These packets are not actual attacks or exploits; rather, they contain selected features of attack traffic embedded in benign content. We provide these packets to a different cyber attack detection system to gauge their generalizability and effect. We combine the deception packets' characteristics, the second system's response, and external observables to propose a deception model that assesses the effectiveness of the manufactured network traffic on our target. We demonstrate the development and execution of a specific deception, and we propose the corresponding deception model.
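
To make the alert-aggregation mechanism concrete, below is a minimal sketch in Python of a naive-Bayes-style posterior over an "attack" meta-event given binary NIDS alert evidence. This is not the model used in the paper; the alert names, prior, and likelihoods are hypothetical placeholders, chosen only to illustrate how packets that merely carry signature bytes inside benign payloads can fire the same alerts as genuine attack traffic and so drive the meta-event posterior upward.

# Illustrative sketch only, not the detection system from the paper: a
# naive-Bayes-style posterior over an "attack" meta-event given binary NIDS
# alert evidence. All alert names, priors, and likelihoods are hypothetical.

PRIOR_ATTACK = 0.01  # placeholder prior probability of an attack meta-event

# alert name -> (P(alert fires | attack), P(alert fires | benign))
ALERT_MODEL = {
    "signature_match_shellcode": (0.90, 0.02),
    "signature_match_worm":      (0.70, 0.05),
    "port_scan_heuristic":       (0.60, 0.10),
}

def posterior_attack(observed_alerts):
    """P(attack | alert evidence), assuming conditionally independent alerts."""
    p_attack, p_benign = PRIOR_ATTACK, 1.0 - PRIOR_ATTACK
    for alert, (p_fire_attack, p_fire_benign) in ALERT_MODEL.items():
        fired = alert in observed_alerts
        p_attack *= p_fire_attack if fired else (1.0 - p_fire_attack)
        p_benign *= p_fire_benign if fired else (1.0 - p_fire_benign)
    return p_attack / (p_attack + p_benign)

# Ordinary benign traffic: no alerts fire, posterior drops well below the prior.
print(posterior_attack(set()))                                                   # ~0.0001
# Deception packets carrying signature bytes inside benign payloads fire the
# same alerts as real attack traffic and push the meta-event posterior up.
print(posterior_attack({"signature_match_shellcode", "signature_match_worm"}))   # ~0.74

Under these placeholder numbers, benign traffic with no alerts yields a posterior near zero, while two signature-match alerts raise it to roughly 0.74. The sketch is only meant to convey the abstract's point that the detection model's inputs are derived from observable, and therefore manipulable, network traffic.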

    Published In

    BMAW'14: Proceedings of the Eleventh UAI Conference on Bayesian Modeling Applications Workshop - Volume 1218
    July 2014, 101 pages
    Editors: Kathryn Blackmond Laskey, Jim Jones, Russell Almond

    Publisher

    CEUR-WS.org, Aachen, Germany

    Publication History

    Published: 27 July 2014

    Author Tags

    1. Bayesian model
    2. cyber attack
    3. cyber deception
    4. deception model
    5. intrusion detection system
