DOI: 10.1145/1985441.1985457

Security versus performance bugs: a case study on Firefox

Published: 21 May 2011

Abstract

A good understanding of the impact of different types of bugs on various project aspects is essential to improving software quality research and practice. For instance, we would expect security bugs to be fixed faster than other types of bugs due to their critical nature. However, prior research has often treated all bugs as similar when studying various aspects of software quality (e.g., predicting the time to fix a bug), or has focused on one particular type of bug (e.g., security bugs) with little comparison to other types. In this paper, we study how different types of bugs (performance and security bugs) differ from each other and from the rest of the bugs in a software project. Through a case study on the Firefox project, we find that security bugs are fixed and triaged much faster, but are reopened and tossed more frequently. Furthermore, we find that security bugs involve more developers and impact more files in a project. Ours is the first work to empirically study performance bugs and compare them to the frequently studied security bugs. Our findings highlight the importance of considering the different types of bugs in software quality research and practice.
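The abstract compares bug types on metrics such as triage and fix time. As a minimal sketch (not the authors' actual analysis pipeline), the example below shows how a per-type median time-to-fix could be computed from bug records; the records, field names, and type labels are hypothetical stand-ins for data that, in the paper's setting, would be mined from Bugzilla.

```python
from datetime import datetime
from statistics import median

# Hypothetical bug records; in practice these would be mined from a
# bug tracker such as Bugzilla (reported/fixed timestamps plus a label).
bugs = [
    {"type": "security",    "reported": "2010-01-01", "fixed": "2010-01-05"},
    {"type": "security",    "reported": "2010-02-01", "fixed": "2010-02-03"},
    {"type": "performance", "reported": "2010-01-10", "fixed": "2010-03-01"},
    {"type": "other",       "reported": "2010-01-15", "fixed": "2010-02-15"},
]

def days_to_fix(bug):
    """Elapsed days between the report and the fix of a single bug."""
    fmt = "%Y-%m-%d"
    delta = (datetime.strptime(bug["fixed"], fmt)
             - datetime.strptime(bug["reported"], fmt))
    return delta.days

def median_fix_time(bugs, bug_type):
    """Median days-to-fix over all bugs of the given type (None if no bugs)."""
    times = [days_to_fix(b) for b in bugs if b["type"] == bug_type]
    return median(times) if times else None

for t in ("security", "performance", "other"):
    print(t, median_fix_time(bugs, t))
```

Medians are used rather than means because bug-lifetime distributions are typically heavily skewed by a few long-lived reports.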




Published In

MSR '11: Proceedings of the 8th Working Conference on Mining Software Repositories
May 2011
260 pages
ISBN:9781450305747
DOI:10.1145/1985441
Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. bugzilla
  2. empirical study
  3. firefox
  4. performance bugs
  5. security bugs

Qualifiers

  • Research-article

Conference

ICSE 2011: International Conference on Software Engineering (sponsor)
May 21-22, 2011
Waikiki, Honolulu, HI, USA

