BAIT: Benchmarking (Embedding) Architectures for Interactive Theorem-Proving
DOI:
https://doi.org/10.1609/aaai.v38i9.28931
Keywords:
KRR: Automated Reasoning and Theorem Proving, ML: Evaluation and Analysis, ML: Representation Learning, ML: Graph-based Machine Learning
Abstract
Artificial Intelligence for Theorem Proving (AITP) has given rise to a plethora of benchmarks and methodologies, particularly in Interactive Theorem Proving (ITP). Research in the area is fragmented, with a diverse set of approaches being spread across several ITP systems. This presents a significant challenge to the comparison of methods, which are often complex and difficult to replicate. Addressing this, we present BAIT, a framework for the fair and streamlined comparison of learning approaches in ITP. We demonstrate BAIT’s capabilities with an in-depth comparison, across several ITP benchmarks, of state-of-the-art architectures applied to the problem of formula embedding. We find that Structure Aware Transformers perform particularly well, improving on techniques associated with the original problem sets. BAIT also allows us to assess the end-to-end proving performance of systems built on interactive environments. This unified perspective reveals a novel end-to-end system that improves on prior work. We also provide a qualitative analysis, illustrating that improved performance is associated with more semantically-aware embeddings. By streamlining the implementation and comparison of Machine Learning algorithms in the ITP context, we anticipate BAIT will be a springboard for future research.
Published
2024-03-24
How to Cite
Lamont, S., Norrish, M., Dezfouli, A., Walder, C., & Montague, P. (2024). BAIT: Benchmarking (Embedding) Architectures for Interactive Theorem-Proving. Proceedings of the AAAI Conference on Artificial Intelligence, 38(9), 10607-10615. https://doi.org/10.1609/aaai.v38i9.28931
Section
AAAI Technical Track on Knowledge Representation and Reasoning