DeepSaDe: Learning Neural Networks That Guarantee Domain Constraint Satisfaction
DOI:
https://doi.org/10.1609/aaai.v38i11.29109
Keywords:
ML: Neuro-Symbolic Learning, CSO: Constraint Optimization, CSO: Constraint Satisfaction, CSO: Satisfiability, CSO: Satisfiability Modulo Theories, ML: Classification and Regression, ML: Ethics, Bias, and Fairness, ML: Multi-class/Multi-label Learning & Extreme Classification, ML: Optimization
Abstract
As machine learning models, specifically neural networks, become increasingly popular, there are concerns about their trustworthiness, especially in safety-critical applications, e.g., the actions of an autonomous vehicle must be safe. There are approaches that can train neural networks with such domain requirements enforced as constraints, but they either cannot guarantee that the constraints will be satisfied by all possible predictions (even on unseen data) or are limited in the types of constraints they can enforce. In this paper, we present an approach to train neural networks that can enforce a wide variety of constraints and guarantee that every possible prediction satisfies them. The approach builds on earlier work in which learning linear models is formulated as a constraint satisfaction problem (CSP). To make this idea applicable to neural networks, two crucial new elements are added: constraint propagation over the network layers, and weight updates based on a mix of gradient descent and CSP solving. Evaluation on various machine learning tasks demonstrates that our approach is flexible enough to enforce a wide variety of domain constraints and is able to guarantee them in neural networks.
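The sketch below is not the authors' DeepSaDe algorithm; it only illustrates, under simplifying assumptions, the general idea of interleaving unconstrained gradient steps with a hard feasibility repair of the output layer so that a constraint holds for all inputs, not just training points. In DeepSaDe the repair step involves constraint propagation and a CSP/SMT solver; here it is replaced by a trivial projection (clamping) for a non-negativity constraint, and all names (e.g. project_output_head) are hypothetical.

```python
# Illustrative sketch only: gradient step followed by a hard projection of the
# output head so that every prediction of the network is non-negative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny regression network: ReLU hidden layer, linear output head.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.01)

def project_output_head(head: nn.Linear) -> None:
    """Repair the output head so the constraint y_hat >= 0 holds for ALL inputs:
    the ReLU guarantees non-negative hidden activations, so it suffices that
    the head's weights and bias are non-negative."""
    with torch.no_grad():
        head.weight.clamp_(min=0.0)
        head.bias.clamp_(min=0.0)

X = torch.randn(64, 4)
y = torch.rand(64, 1)  # targets in [0, 1], consistent with the constraint

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()                    # unconstrained gradient step ...
    project_output_head(net[-1])  # ... followed by a hard feasibility repair

# The guarantee is input-independent: it also holds on unseen data.
assert (net(torch.randn(1000, 4)) >= 0).all()
```

In the paper, the analogue of this repair step handles much richer constraints by propagating them backward through the layers and solving for feasible last-layer weights with a solver, rather than by a simple clamp.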
Published
2024-03-24
How to Cite
Goyal, K., Dumancic, S., & Blockeel, H. (2024). DeepSaDe: Learning Neural Networks That Guarantee Domain Constraint Satisfaction. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12199-12207. https://doi.org/10.1609/aaai.v38i11.29109
Issue
Section
AAAI Technical Track on Machine Learning II