DOI: 10.1145/3611643.3616242
Research article | Open access

On the Relationship between Code Verifiability and Understandability

Published: 30 November 2023

Abstract

Proponents of software verification have argued that simpler code is easier to verify: that is, that verification tools issue fewer false positives and require less human intervention when analyzing simpler code. We empirically validate this assumption by comparing the number of warnings produced by four state-of-the-art verification tools on 211 snippets of Java code with 20 metrics of code comprehensibility from human subjects in six prior studies. Our experiments, based on a statistical (meta-)analysis, show that, in aggregate, there is a small correlation (r = 0.23) between understandability and verifiability. The results support the claim that easy-to-verify code is often easier to understand than code that requires more effort to verify. Our work has implications for the users and designers of verification tools and for future attempts to automatically measure code comprehensibility: verification tools may have ancillary benefits to understandability, and measuring understandability may require reasoning about semantic, not just syntactic, code properties.
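
To make the study design concrete, here is a minimal, hypothetical sketch of the core measurement the abstract describes: count the warnings a verification tool issues on each code snippet and rank-correlate those counts with human understandability scores. The data values, the use of SciPy's kendalltau, and the Greiner's-relation conversion to a Pearson-type r are illustrative assumptions, not the authors' actual pipeline, which pooled per-study correlations in a formal meta-analysis.

```python
# Illustrative sketch only (hypothetical data, not the authors' pipeline):
# correlate per-snippet verification-tool warning counts with human
# understandability scores, as the abstract describes at a high level.
import math

from scipy.stats import kendalltau

# Hypothetical per-snippet measurements: warnings issued by one tool,
# and a mean human rating (higher = easier to understand).
warnings = [0, 1, 3, 0, 5, 2, 4, 1]
understandability = [4.5, 4.0, 2.5, 4.8, 1.9, 3.2, 2.1, 3.9]

# Kendall's tau is a rank correlation, so the two metrics' different
# scales and units do not matter.
tau, p_value = kendalltau(warnings, understandability)

# Greiner's relation converts tau to a Pearson-type r, a common step
# before pooling effect sizes across studies in a meta-analysis.
r = math.sin(math.pi * tau / 2)
print(f"tau = {tau:.2f}, p = {p_value:.3f}, approx. r = {r:.2f}")
```

On these toy values the association is strongly negative (more warnings, lower understandability scores); the paper's aggregate finding is a much smaller correlation (r = 0.23) between understandability and verifiability.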

Supplementary Material

Video (fse23main-p5-p-video.mp4)
"Proponents of software verification have argued that simpler code is easier to verify: that is, that verification tools issue fewer false positives and require less human intervention when analyzing simpler code. We empirically validate this assumption by comparing the number of warnings produced by four state-of-the-art verification tools on 211 snippets of Java code with 20 metrics of code comprehensibility from human subjects in six prior studies. Our experiments, based on a statistical (meta-)analysis, show that, in aggregate, there is a small correlation (𝑟 = 0.23) between understandability and verifiability. The results support the claim that easy-to-verify code is often easier to understand than code that requires more effort to verify. Our work has implications for the users and designers of verification tools and for future attempts to automatically measure code comprehensibility: verification tools may have ancillary benefits to understandability, and measuring understandability may require reasoning about semantic, not just syntactic, code properties."

Published In

ESEC/FSE 2023: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
November 2023
2215 pages
ISBN:9798400703270
DOI:10.1145/3611643
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Verification
  2. code comprehension
  3. meta-analysis
  4. static analysis

Qualifiers

  • Research-article

Conference

ESEC/FSE '23

Acceptance Rates

Overall Acceptance Rate 112 of 543 submissions, 21%

Article Metrics

  • Total citations: 0
  • Total downloads: 298
  • Downloads (last 12 months): 298
  • Downloads (last 6 weeks): 41
Reflects downloads up to 26 Nov 2024
