

On the Two-fold Role of Logic Constraints in Deep Learning / Ciravegna, Gabriele. - (2022).

On the Two-fold Role of Logic Constraints in Deep Learning

Gabriele Ciravegna
2022

Abstract

Deep Learning (DL) is a class of Artificial Intelligence (AI) algorithms concerned with the training of Deep Neural Networks (DNNs). Thanks to the modularity of their structure, these models are effective on a variety of problems, ranging from computer vision to speech recognition, and in the last few years DL has achieved impressive results. Nonetheless, the excitement around the field risks turning into disappointment, since many issues remain open. In this thesis, we consider the Learning from Constraints framework, in which learning is conceived as the problem of finding task functions that respect a set of constraints representing the available knowledge. This setting allows different types of knowledge to be considered (including, but not limited to, supervisions) and mitigates some of the limits of DL.

DNN deployment is still precluded in contexts where manual labelling is expensive. Active Learning addresses this problem by requiring supervision only on a few unlabelled samples. In this scenario, we propose to take domain knowledge into account: the relationships among classes offer a way to spot incoherent predictions, i.e., predictions for which the model most likely needs supervision. We develop a framework where first-order-logic knowledge is converted into constraints and their violation is used as a guide for sample selection.

Another limit of DL is the fragility of DNNs when facing adversarial examples, carefully perturbed samples that cause misclassifications at test time. As in the previous case, we propose to employ domain knowledge, since it offers a natural guide for detecting adversarial examples: while the domain knowledge is fulfilled over the training data, the same does not hold outside this distribution. A constrained classifier can therefore naturally reject samples associated with incoherent predictions, which in this case are adversarial examples.

While some relationships are known properties of the considered environments, DNNs can also autonomously develop new relational patterns. We therefore propose a novel Learning of Constraints formulation, which aims at discovering which logic constraints hold among the task functions. This also allows explaining DNNs, which are otherwise commonly considered black-box classifiers; indeed, the lack of transparency is a major limit of DL, preventing its application in many safety-critical domains. In a first approach, we propose a pair of neural networks, where one learns the relationships among the outputs of the other and provides First-Order Logic (FOL)-based descriptions. Different types of explanations are evaluated in distinct experiments, showing that the proposed approach discovers new knowledge and can improve the classifier performance. In a second approach, we propose an end-to-end differentiable method that extracts logic explanations from the classifier itself. The method relies on an entropy-based layer that automatically identifies the most relevant concepts, enabling the distillation of concise logic explanations in several safety-critical domains and outperforming state-of-the-art white-box models.
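To make the role of constraint violation concrete, the sketch below shows one possible way of turning a first-order rule into a fuzzy constraint and using its violation score both to rank unlabelled samples for supervision (the Active Learning use) and to reject suspicious test inputs (the adversarial-detection use). The rule "cat implies animal", the Łukasiewicz implication, the toy predictions, and the rejection threshold are illustrative assumptions for this sketch, not the exact formulation adopted in the thesis.

```python
import numpy as np

def implies(a, b):
    """Lukasiewicz fuzzy implication: min(1, 1 - a + b)."""
    return np.minimum(1.0, 1.0 - a + b)

def violation(preds, premise_idx, conclusion_idx):
    """Violation of the rule class[premise] => class[conclusion]
    on a batch of per-class predictions in [0, 1] (shape: N x C).
    0 means the rule is satisfied, 1 means it is maximally violated."""
    truth = implies(preds[:, premise_idx], preds[:, conclusion_idx])
    return 1.0 - truth

# Toy predictions over the classes [cat, dog, animal] for four samples
# (illustrative values, not real model outputs).
preds = np.array([
    [0.9, 0.1, 0.95],   # coherent: cat and animal both high
    [0.8, 0.2, 0.10],   # incoherent: cat high but animal low
    [0.1, 0.7, 0.90],   # coherent
    [0.6, 0.5, 0.20],   # mildly incoherent
])

v = violation(preds, premise_idx=0, conclusion_idx=2)  # cat => animal

# Active-Learning style use: query labels for the most incoherent samples.
query_order = np.argsort(-v)
print("violation scores:", np.round(v, 2))
print("samples to label first:", query_order[:2])

# Rejection style use: flag samples whose violation exceeds a threshold,
# e.g. candidate adversarial examples at test time.
threshold = 0.5  # illustrative value
print("rejected samples:", np.where(v > threshold)[0])
```

In this toy run the second sample (cat predicted without animal) gets the highest violation score, so it is the first candidate for labelling and the only one rejected at the chosen threshold.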
Files in this item:
_Ph_D__Thesis__Tesi_di_Dottorato.pdf

Open access

Type: 2. Post-print / Author's Accepted Manuscript
License: Creative Commons
Size: 13.45 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2980675