DOI: 10.1145/3661167.3661269

Keynote

The Role of Software Measurement in Assured LLM-Based Software Engineering

Published: 18 June 2024

Abstract

Assured Large Language Model Software Engineering (Assured LLMSE) addresses two twin challenges: 1. ensuring that LLM-generated code does not regress the properties of the original code; 2. quantifying, in a verifiable and measurable way, the improvement that the improved code achieves over the original.
In so doing, the Assured LLMSE approach tackles the problem of LLMs’ tendency to hallucinate, while also providing confidence that generated code improves an existing code base. Software testing and measurement play critical roles in this improvement process: testing is the guard against regression, while measurement provides the quantifiable assurance of improvement. Assured LLMSE takes its inspiration from previous work on genetic improvement, for which software measurement also plays a central role. In this keynote we outline the Assured LLMSE approach, highlighting the role of software measurement in the provision of quantifiable, verifiable assurances for code that originates from LLM-based inference. This paper is an outline of the content of the keynote by Mark Harman at the 28th International Conference on Evaluation and Assessment in Software Engineering.
This is joint work with Nadia Alshahwan, Andrea Aquino, Jubin Chheda, Anastasia Finegenova, Inna Harper, Mitya Lyubarskiy, Neil Maiden, Alexander Mols, Shubho Sengupta, Rotem Tal, Alexandru Marginean, and Eddy Wang.
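The accept/reject discipline described in the abstract — tests guarding against regression, a metric quantifying improvement — can be sketched as a simple gate. The function and metric names below are hypothetical illustrations under assumed interfaces, not the authors' implementation:

```python
# Illustrative sketch of an Assured LLMSE accept/reject gate (hypothetical
# names, not the authors' implementation). An LLM-generated candidate is
# accepted only if (1) it passes every test the original code passes, so
# nothing regresses, and (2) it strictly improves a chosen metric, so the
# improvement is measurable and verifiable.

from typing import Callable, List

def assured_accept(
    original: str,
    candidate: str,
    tests: List[Callable[[str], bool]],  # each test takes source code, returns pass/fail
    metric: Callable[[str], float],      # higher is better (e.g. measured coverage)
) -> bool:
    # Regression guard: every test the original passes, the candidate must pass too.
    for test in tests:
        if test(original) and not test(candidate):
            return False
    # Quantified assurance: the metric must strictly improve over the original.
    return metric(candidate) > metric(original)
```

A candidate that hallucinates broken code fails the regression guard, and a candidate that merely matches the original fails the strict-improvement check; only verifiably better code passes.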



Information

Published In

EASE '24: Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering
June 2024
728 pages
ISBN:9798400717017
DOI:10.1145/3661167
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Automated Code Generation
  2. CodeLlama
  3. Genetic Improvement (GI)
  4. Large Language Models (LLMs)
  5. Llama
  6. Search Based Software Engineering (SBSE)

Qualifiers

  • Keynote
  • Research
  • Refereed limited

Conference

EASE 2024

Acceptance Rates

Overall Acceptance Rate 71 of 232 submissions, 31%


Bibliometrics

Article Metrics

  • Total Citations: 0
  • Total Downloads: 62
  • Downloads (last 12 months): 62
  • Downloads (last 6 weeks): 8

Reflects downloads up to 24 Nov 2024
