Learning memory-guided normality for anomaly detection

H Park, J Noh, B Ham - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020 - openaccess.thecvf.com
Abstract
We address the problem of anomaly detection, that is, detecting anomalous events in a video sequence. Anomaly detection methods based on convolutional neural networks (CNNs) typically leverage proxy tasks, such as reconstructing input video frames, to learn models describing normality without seeing anomalous samples at training time, and quantify the extent of abnormalities using the reconstruction error at test time. The main drawbacks of these approaches are that they do not consider the diversity of normal patterns explicitly, and that the powerful representation capacity of CNNs allows them to reconstruct even abnormal video frames. To address this problem, we present an unsupervised learning approach to anomaly detection that considers the diversity of normal patterns explicitly, while lessening the representation capacity of CNNs. To this end, we propose to use a memory module with a new update scheme, where items in the memory record prototypical patterns of normal data. We also present novel feature compactness and separateness losses to train the memory, boosting the discriminative power of both the memory items and the deeply learned features from normal data. Experimental results on standard benchmarks demonstrate the effectiveness and efficiency of our approach, which outperforms the state of the art.
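As a rough illustration of the mechanism the abstract describes, the sketch below shows a soft read over memory items and one plausible instantiation of the feature compactness and separateness losses in PyTorch. The cosine-similarity read weights and the nearest/second-nearest triplet form of the separateness loss are assumptions made for illustration, not the paper's verbatim definitions.

```python
import torch
import torch.nn.functional as F

def read_memory(queries, memory):
    """Soft read: each query attends over the memory items via a
    softmax on cosine similarity and retrieves a weighted sum of
    prototypes (assumed read scheme, for illustration)."""
    # queries: (N, C) L2-normalized query features from the encoder
    # memory:  (M, C) L2-normalized memory items (normal prototypes)
    sim = queries @ memory.t()              # (N, M) cosine similarities
    weights = F.softmax(sim, dim=1)         # attention over memory items
    return weights @ memory                 # (N, C) retrieved prototypes

def compactness_separateness_losses(queries, memory, margin=1.0):
    """Hypothetical form of the two losses: pull each query toward
    its nearest memory item (compactness) and push it away from the
    second-nearest item with a triplet-style margin (separateness)."""
    dist = torch.cdist(queries, memory)     # (N, M) Euclidean distances
    top2 = dist.topk(2, dim=1, largest=False).values
    d_first, d_second = top2[:, 0], top2[:, 1]
    loss_compact = (d_first ** 2).mean()
    loss_separate = F.relu(d_first - d_second + margin).mean()
    return loss_compact, loss_separate

# Usage sketch with random features (sizes are arbitrary):
queries = F.normalize(torch.randn(8, 512), dim=1)   # encoder outputs
memory = F.normalize(torch.randn(10, 512), dim=1)   # learnable items
read = read_memory(queries, memory)
l_c, l_s = compactness_separateness_losses(queries, memory)
```

Under these assumptions, keeping the number of memory items small is what lessens the CNN's representation capacity: abnormal inputs must be decoded from combinations of a few normal prototypes, which inflates their reconstruction error at test time.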