2020
Recalibrating classifiers for interpretable abusive content detection
Bertie Vidgen | Scott Hale | Sam Staton | Tom Melham | Helen Margetts | Ohad Kammar | Marcin Szymczak
Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science
We investigate the use of machine learning classifiers for detecting online abuse in empirical research. We show that uncalibrated classifiers (i.e. where the ‘raw’ scores are used) align poorly with human evaluations. This limits their use for understanding the dynamics, patterns and prevalence of online abuse. We examine two widely used classifiers (created by Perspective and Davidson et al.) on a dataset of tweets directed against candidates in the UK’s 2017 general election. A Bayesian approach is presented to recalibrate the raw scores from the classifiers, using probabilistic programming and newly annotated data. We argue that interpretability evaluation and recalibration are integral to the application of abusive content classifiers.
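As a rough illustration of the kind of recalibration the abstract describes, the sketch below fits a Platt-style Bayesian logistic recalibration of raw classifier scores against human annotations, using PyMC as the probabilistic programming language. This is not the paper's actual model: the specific functional form, priors, library, and the toy data are all assumptions made here for illustration.

```python
import numpy as np
from scipy.special import expit, logit
import pymc as pm

# Hypothetical data: raw classifier scores in (0, 1) and binary
# human annotations (1 = abusive, 0 = not abusive).
raw = np.array([0.12, 0.85, 0.40, 0.97, 0.05, 0.63])
y = np.array([0, 1, 0, 1, 0, 1])

with pm.Model():
    # Platt-style recalibration: logistic regression on the logit
    # of the raw score, with weakly informative priors.
    a = pm.Normal("a", mu=0.0, sigma=5.0)
    b = pm.Normal("b", mu=1.0, sigma=5.0)
    pm.Bernoulli("obs", logit_p=a + b * logit(raw), observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Posterior-mean recalibrated probability for a new raw score of 0.7.
a_post = idata.posterior["a"].values.ravel()
b_post = idata.posterior["b"].values.ravel()
print(expit(a_post + b_post * logit(0.7)).mean())
```

A Bayesian treatment like this yields a full posterior over the recalibration parameters rather than a point estimate, so the uncertainty in the recalibrated probabilities can itself be reported when the annotated sample is small.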