
Discriminatively trained neural classifiers can be trusted only when the input data come from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is essential to avoid classification errors.
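As a concrete point of reference (a common baseline, not the method of any particular paper cited here), one can score a test input by the maximum softmax probability of the trained classifier and flag low-confidence inputs as OOD. A minimal PyTorch sketch, assuming `model` is any classifier returning logits and the threshold is tuned on held-out in-distribution data:

```python
import torch
import torch.nn.functional as F

def msp_ood_score(model, x):
    """Maximum-softmax-probability confidence score.

    Lower values suggest the input is more likely out-of-distribution.
    `model` maps a batch of inputs to logits of shape (batch, num_classes).
    """
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    return probs.max(dim=1).values

# Usage sketch: flag inputs whose confidence falls below a validation-chosen threshold.
# is_ood = msp_ood_score(model, x_batch) < threshold
```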
In essence, our method jointly trains both classification and generative neural networks for out-of-distribution detection. We demonstrate its effectiveness on various ...
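A minimal sketch of the classifier-side objective such joint training typically uses: standard cross-entropy on in-distribution data plus a term that pushes the predictive distribution toward uniform on samples produced by the generative network. The weight `beta` is an assumption for illustration, and the generator's own update is omitted:

```python
import math
import torch
import torch.nn.functional as F

def confident_classifier_loss(logits_in, labels_in, logits_ood, beta=1.0):
    """Cross-entropy on in-distribution samples plus KL(Uniform || p(y | x_ood))
    on generated OOD samples, encouraging uniform (low-confidence) predictions
    away from the training distribution. `beta` is an illustrative weight."""
    num_classes = logits_in.size(1)
    ce = F.cross_entropy(logits_in, labels_in)
    log_p_ood = F.log_softmax(logits_ood, dim=1)
    kl_to_uniform = -log_p_ood.mean(dim=1).mean() - math.log(num_classes)
    return ce + beta * kl_to_uniform
```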
Analysis of Confident-Classifiers for Out-of-Distribution Detection. Sachin Vernekar, Ashish Gaurav, Taylor Denouden, Buu Phan, Vahdat Abdelzad, Rick Salay.
Code for the ICLR 2019 SafeML workshop paper: Analysis of Confident-Classifiers for Out-of-distribution Detection.
The problem of detecting whether a test sample is from in-distribution (i.e., the training distribution of a classifier) or out-of-distribution sufficiently ...
Oct 19, 2022 · In this article, we'll show you a novel and simple adjustment to model-predicted probabilities that can improve OOD detection with classifier models trained on ...
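Without reproducing that article's exact adjustment, one illustrative form of such a correction normalizes each sample's top-class probability by a per-class confidence level estimated on held-out in-distribution data, so that classes the model is chronically unsure about are not mistaken for OOD. The function below is a hypothetical sketch, not the article's implementation:

```python
import numpy as np

def adjusted_confidence(probs, class_thresholds):
    """Illustrative probability adjustment (hypothetical).

    probs: (n_samples, n_classes) predicted probabilities.
    class_thresholds: (n_classes,) mean top-class probability per class,
        estimated on an in-distribution validation set.
    Returns a per-sample confidence; lower values suggest OOD.
    """
    top_class = probs.argmax(axis=1)
    top_prob = probs.max(axis=1)
    return top_prob / class_thresholds[top_class]
```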
This paper suggests training a classifier with an explicit "reject" class for OOD samples, minimizing the standard cross-entropy loss on ...
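A minimal sketch of that setup, assuming a K-class task with the extra index K reserved for the reject class and access to some auxiliary OOD samples during training; the architecture and input size are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10  # number of in-distribution classes; index K is the reject class
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),  # placeholder input size (e.g., MNIST-like images)
    nn.ReLU(),
    nn.Linear(256, K + 1),
)

def reject_class_loss(x_in, y_in, x_ood):
    """Standard cross-entropy where in-distribution samples keep their labels
    and all OOD samples are assigned the explicit reject class K."""
    x = torch.cat([x_in, x_ood], dim=0)
    y_ood = torch.full((x_ood.size(0),), K, dtype=torch.long)
    y = torch.cat([y_in, y_ood], dim=0)
    return F.cross_entropy(model(x), y)
```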