Research article
DOI: 10.1145/1853919.1853925
Which is the right source for vulnerability studies?: an empirical analysis on Mozilla Firefox

Published: 15 September 2010

Abstract

Recent years have seen a trend towards quantitative security assessment and the use of empirical methods to analyze or predict vulnerable components. Many papers have focused on vulnerability discovery models based on either public vulnerability databases (e.g., CVE, NVD) or vendor ones (e.g., MFSA); some combine these databases. Most of these works address a knowledge problem: can we understand the empirical causes of vulnerabilities? Can we predict them? Still, if the data sources do not completely capture the phenomenon we are interested in predicting, then our predictor might be optimal with respect to the data we have but unsatisfactory in practice.
In our work, we focus on a more fundamental question: the quality of vulnerability databases. We provide an analytical comparison of different security metric papers and their respective data sources. We also show, based on experimental data for Mozilla Firefox, how using different data sources can lead to completely different results.
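The abstract's central point, that different data sources can yield completely different results, can be sketched with a toy example. Everything below is hypothetical (the component names and advisory IDs are invented for illustration, not taken from the paper): two databases attribute different vulnerability sets to the same components, so ranking components by vulnerability count gives a different answer depending on which source is consulted.

```python
# Illustrative sketch with hypothetical data: how the choice of vulnerability
# database can change which components look most vulnerable.
# Each source maps a component name to the set of vulnerability IDs it records.

def rank_components(source):
    """Rank components by the number of vulnerabilities a source attributes to them."""
    return sorted(source, key=lambda c: len(source[c]), reverse=True)

# Invented per-component records from a public database and a vendor one.
public_db = {           # e.g. NVD-style entries (hypothetical)
    "layout": {"CVE-A", "CVE-B", "CVE-C"},
    "js":     {"CVE-D"},
}
vendor_db = {           # e.g. MFSA-style advisories (hypothetical)
    "layout": {"MFSA-1"},
    "js":     {"MFSA-2", "MFSA-3", "MFSA-4"},
}

if __name__ == "__main__":
    print("public ranking:", rank_components(public_db))   # layout first
    print("vendor ranking:", rank_components(vendor_db))   # js first
```

A predictor trained and validated against either source alone would appear accurate on that source's data, yet the two sources disagree on which component is the most vulnerable.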



Published In

MetriSec '10: Proceedings of the 6th International Workshop on Security Measurements and Metrics
September 2010
78 pages
ISBN:9781450303408
DOI:10.1145/1853919
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States



Conference: ESEM '10
Article Metrics

  • Downloads (last 12 months): 14
  • Downloads (last 6 weeks): 2
Reflects downloads up to 01 Jan 2025

Cited By
  • (2024) Early and Realistic Exploitability Prediction of Just-Disclosed Software Vulnerabilities: How Reliable Can It Be? ACM Transactions on Software Engineering and Methodology, 33(6):1-41. DOI: 10.1145/3654443. Online: 27-Jun-2024
  • (2024) Defending novice user privacy: An evaluation of default web browser configurations. Computers & Security, 103784. DOI: 10.1016/j.cose.2024.103784. Online: Feb-2024
  • (2023) Empirical Validation of Automated Vulnerability Curation and Characterization. IEEE Transactions on Software Engineering, 49(5):3241-3260. DOI: 10.1109/TSE.2023.3250479. Online: 1-May-2023
  • (2023) The anatomy of a vulnerability database: A systematic mapping study. Journal of Systems and Software, 201:111679. DOI: 10.1016/j.jss.2023.111679. Online: Jul-2023
  • (2023) Evaluating the Future Device Security Risk Indicator for Hundreds of IoT Devices. Security and Trust Management, pages 52-70. DOI: 10.1007/978-3-031-29504-1_3. Online: 4-Apr-2023
  • (2022) Risk Prediction of IoT Devices Based on Vulnerability Analysis. ACM Transactions on Privacy and Security, 25(2):1-36. DOI: 10.1145/3510360. Online: 4-May-2022
  • (2021) An improved text classification modelling approach to identify security messages in heterogeneous projects. Software Quality Journal. DOI: 10.1007/s11219-020-09546-7. Online: 27-May-2021
  • (2021) An empirical study of developers' discussions about security challenges of different programming languages. Empirical Software Engineering, 27(1). DOI: 10.1007/s10664-021-10054-w. Online: 1-Dec-2021
  • (2020) Exploring the Security Awareness of the Python and JavaScript Open Source Communities. Proceedings of the 17th International Conference on Mining Software Repositories, pages 16-20. DOI: 10.1145/3379597.3387513. Online: 29-Jun-2020
  • (2020) Classification of Microsoft Office Vulnerabilities: A Step Ahead for Secure Software Development. Bio-inspired Neurocomputing, pages 381-402. DOI: 10.1007/978-981-15-5495-7_21. Online: 22-Jul-2020
