Model-Agnostic Explanations and Evaluation of Machine Learning

Date

2018

Authors

Correia Ribeiro, Marco Tulio

Abstract

Despite many successes, complex machine learning systems are limited in their impact due to several issues regarding communication with humans: they are functionally black boxes, hard to debug, and hard to evaluate properly. This communication is crucial, however: humans are the ones who train, deploy, and use machine learning models, and thus have to make trust and evaluation decisions. Furthermore, it is humans who try to improve these models, and an understanding of their behavior is very valuable for this purpose. This dissertation addresses this communication problem by presenting model-agnostic explanations and evaluation, which improve the interaction between humans and any machine learning model. Specifically, we present: (1) Local Interpretable Model-Agnostic Explanations (LIME), an explanation technique that can explain any black-box model by approximating it locally with a linear model; (2) Anchors, model-agnostic explanations that represent sufficient conditions for predictions; (3) Semantically Equivalent Adversaries and Adversarial Rules (SEAs and SEARs), semantics-preserving perturbations and rules that unearth brittleness bugs in text models; and (4) Implication Consistency, a new kind of evaluation metric that considers the relationship between model outputs in order to measure higher-level thinking. We demonstrate that these contributions enable efficient communication between machine learning models and humans, empowering humans to better evaluate, improve, and assess trust in models.
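
To make the local-approximation idea behind LIME concrete, the following is a minimal illustrative sketch: sample perturbations around the instance being explained, query the black box on them, weight each sample by its proximity to the instance, and fit a weighted linear model whose coefficients act as the explanation. This is not the dissertation's implementation or the released lime package API; the function and parameter names (explain_locally, kernel_width, the Gaussian perturbation scheme) are assumptions made for exposition, and standardized tabular features are assumed.

import numpy as np
from sklearn.linear_model import Ridge


def explain_locally(predict_proba, x, num_samples=5000, kernel_width=0.75, seed=0):
    """Return per-feature weights of a linear model approximating predict_proba near x."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise (assumes standardized tabular features).
    perturbations = x + rng.normal(scale=1.0, size=(num_samples, x.shape[0]))
    # Query the black box for the score of the class being explained.
    scores = predict_proba(perturbations)
    # Weight each perturbation by an exponential kernel on its distance to x.
    distances = np.linalg.norm(perturbations - x, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # Fit a weighted linear model; its coefficients are the local explanation.
    local_model = Ridge(alpha=1.0)
    local_model.fit(perturbations - x, scores, sample_weight=weights)
    return local_model.coef_


if __name__ == "__main__":
    # Toy, hypothetical black box: a nonlinear scorer over three features.
    black_box = lambda X: 1 / (1 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1] ** 2)))
    instance = np.array([0.5, -1.0, 2.0])
    print(explain_locally(black_box, instance))

The full method additionally maps inputs to an interpretable representation and selects a small number of features for the linear model; the sketch above keeps only the sample-weight-fit core so the "locally approximate with a linear model" idea is visible.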

Description

Thesis (Ph.D.)--University of Washington, 2018

Keywords

human-computer interaction, interpretability, machine learning, computer science
