DOI: 10.1145/3364641.3364658

Using auxiliary artifacts during code inspection activity: findings from an exploratory study

Published: 28 October 2019

Abstract

Code inspection is an important activity for identifying defects in source code and improving software quality. However, even when using techniques such as checklists, inspectors rely on implicit decision-making knowledge. In this paper, we perform an exploratory study with two groups of inspectors (Group1 and Group2) with two objectives: 1) to present findings on how (and whether) auxiliary artifacts influence decision making during the code inspection activity, and 2) to show whether the use of auxiliary artifacts affects the number of defects inspectors identify. Both groups used the computational code inspection support of the CRISTA tool, but only Group1 used auxiliary artifacts (requirements, UML diagrams, software metrics). We identified 10 findings, all related to the inspectors' decision making and to the influence of the artifacts on defect identification. The findings suggest that when inspectors use auxiliary artifacts, their effectiveness in identifying defects improves. In addition, their decision making is more homogeneous than that of inspectors who do not use auxiliary artifacts. However, further investigation is needed before the results can be generalized. As future work, different strategies for code inspection techniques can be defined based on these findings.



      Published In

      SBQS '19: Proceedings of the XVIII Brazilian Symposium on Software Quality
      October 2019
      330 pages
      ISBN:9781450372824
      DOI:10.1145/3364641

      In-Cooperation

      • SBC: Brazilian Computer Society

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. Code Inspection
      2. Reading Techniques
      3. Software Artifacts

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

SBQS '19: XVIII Brazilian Symposium on Software Quality
      October 28 - November 1, 2019
      Fortaleza, Brazil

      Acceptance Rates

SBQS '19 paper acceptance rate (and overall): 35 of 99 submissions (35%)
