DOI: 10.1145/3283812.3283820

Towards understanding code readability and its impact on design quality

Published: 04 November 2018

Abstract

Readability of code is commonly believed to impact the overall quality of software. Poor readability not only hinders developers from understanding what the code is doing but can also cause them to make sub-optimal changes and introduce bugs. Developers recognize this risk and rank readability among their top information needs. Researchers have modeled readability scores; however, thus far, no one has investigated how readability evolves over time and how that impacts the design quality of software. We perform a large-scale study of 49 open source Java projects, spanning 8296 commits and 1766 files. We find that readability is high in open source projects and, unlike design quality, does not fluctuate over a project's lifetime. Moreover, readability has a non-significant correlation of 0.151 (Kendall's τ) with code smell count, an indicator of design quality. Since the current readability measure is unable to capture the increased difficulty of reading code whose design quality has degraded, our results hint at the need for better measurement and modeling of code readability.
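The correlation reported above can be illustrated with a minimal sketch. The per-file data below is made up for illustration, and the function computes the simple τ-a variant of Kendall's rank correlation (the paper does not specify its tie handling; library routines such as `scipy.stats.kendalltau` compute the tie-corrected τ-b instead).

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) / number of pairs.
    No tie correction; with ties, prefer a tau-b implementation."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1   # both variables move in the same direction
        elif s < 0:
            discordant += 1   # they move in opposite directions
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical per-file measurements: readability score vs. code smell count
readability = [0.91, 0.85, 0.88, 0.79, 0.93, 0.80]
smells      = [2,    5,    3,    6,    1,    4]
print(round(kendall_tau_a(readability, smells), 3))  # → -0.867
```

A τ near ±1 indicates a strong monotonic association; the paper's observed value of 0.151 is close to zero, which is why the correlation is reported as non-significant.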




Published In

NL4SE 2018: Proceedings of the 4th ACM SIGSOFT International Workshop on NLP for Software Engineering
November 2018
41 pages
ISBN:9781450360555
DOI:10.1145/3283812

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Code smell
  2. Design quality
  3. Readability

Qualifiers

  • Short-paper

Conference

ESEC/FSE '18


Bibliometrics & Citations

Bibliometrics

Article Metrics

  • Downloads (last 12 months): 51
  • Downloads (last 6 weeks): 2
Reflects downloads up to 09 Nov 2024


Cited By

  • (2024) R2I: A Relative Readability Metric for Decompiled Code. Proceedings of the ACM on Software Engineering 1(FSE), 383–405. DOI: 10.1145/3643744. Online publication date: 12-Jul-2024.
  • (2024) An eye tracking study assessing source code readability rules for program comprehension. Empirical Software Engineering 29(6). DOI: 10.1007/s10664-024-10532-x. Online publication date: 5-Oct-2024.
  • (2024) Free open source communities sustainability: Does it make a difference in software quality? Empirical Software Engineering 29(5). DOI: 10.1007/s10664-024-10529-6. Online publication date: 23-Jul-2024.
  • (2024) Causal inference of server- and client-side code smells in web apps evolution. Empirical Software Engineering 29(5). DOI: 10.1007/s10664-024-10478-0. Online publication date: 5-Aug-2024.
  • (2023) A graph-based code representation method to improve code readability classification. Empirical Software Engineering 28(4). DOI: 10.1007/s10664-023-10319-6. Online publication date: 23-May-2023.
  • (2021) Method to Address Complexity in Organizations Based on a Comprehensive Overview. Information 12(10), 423. DOI: 10.3390/info12100423. Online publication date: 16-Oct-2021.
  • (2021) Multilevel Readability Interpretation Against Software Properties: A Data-Centric Approach. Software Technologies, 203–226. DOI: 10.1007/978-3-030-83007-6_10. Online publication date: 21-Jul-2021.