Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology

Can J Cardiol. 2022 Feb;38(2):204-213. doi: 10.1016/j.cjca.2021.09.004. Epub 2021 Sep 14.

Abstract

Many clinicians remain wary of machine learning because of longstanding concerns about "black box" models. "Black box" is shorthand for models that are sufficiently complex that they are not straightforwardly interpretable to humans. Lack of interpretability in predictive models can undermine trust in those models, especially in health care, where so many decisions are, quite literally, matters of life and death. There has been a recent explosion of research in the field of explainable machine learning aimed at addressing these concerns. The promise of explainable machine learning is considerable, but it is important for cardiologists who may encounter these techniques in clinical decision-support tools or novel research papers to have a critical understanding of both their strengths and their limitations. This paper reviews key concepts and techniques in the field of explainable machine learning as they apply to cardiology. Key concepts reviewed include interpretability vs explainability and global vs local explanations. Techniques demonstrated include permutation importance, surrogate decision trees, local interpretable model-agnostic explanations, and partial dependence plots. We discuss several limitations of explainability techniques, focusing on how the nature of explanations as approximations may omit important information about how black-box models work and why they make certain predictions. We conclude by proposing a rule of thumb about when it is appropriate to use black-box models with explanations rather than interpretable models.
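Several of the techniques named in the abstract can be illustrated briefly in code. The sketch below is not taken from the paper; it uses scikit-learn on a synthetic stand-in for tabular clinical data, with hypothetical feature names, to show permutation importance, a global surrogate decision tree, and partial dependence plots for a black-box classifier.

    # Minimal sketch of three explainability techniques named in the abstract.
    # The dataset, feature names, and choice of black-box model are illustrative
    # assumptions, not the paper's actual examples.
    import matplotlib.pyplot as plt
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance, PartialDependenceDisplay
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic binary-outcome data standing in for a clinical cohort.
    X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                               random_state=0)
    feature_names = ["age", "sbp", "ldl", "ef", "bmi", "hr"]  # hypothetical labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The "black box": a boosted-tree classifier.
    black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # 1. Permutation importance: the drop in test-set score when each feature
    #    is shuffled, averaged over repeats.
    pi = permutation_importance(black_box, X_test, y_test, n_repeats=20,
                                random_state=0)
    for name, mean in sorted(zip(feature_names, pi.importances_mean),
                             key=lambda t: -t[1]):
        print(f"{name}: {mean:.3f}")

    # 2. Global surrogate: a shallow decision tree fit to the black box's own
    #    predictions, yielding an approximate, human-readable rule set.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))
    print(export_text(surrogate, feature_names=feature_names))

    # 3. Partial dependence: average predicted outcome as one feature varies.
    PartialDependenceDisplay.from_estimator(black_box, X_test, features=[0, 3],
                                            feature_names=feature_names)
    plt.show()

Local interpretable model-agnostic explanations (LIME), also discussed in the paper, follow the same spirit but fit a simple surrogate around a single prediction rather than the whole model.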

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Artificial Intelligence*
  • Cardiology / methods*
  • Cardiovascular Diseases / therapy*
  • Delivery of Health Care / organization & administration*
  • Humans
  • Machine Learning*