On the importance of gradient norm in PAC-Bayesian bounds
Advances in Neural Information Processing Systems, 2022
Abstract
Generalization bounds, which assess the difference between the true risk and the empirical risk, have been studied extensively. However, to obtain such bounds, current techniques rely on strict assumptions such as a uniformly bounded or a Lipschitz loss function. To avoid these assumptions, in this paper we follow an alternative approach: we relax uniform-boundedness assumptions by using on-average bounded loss and on-average bounded gradient-norm assumptions. Following this relaxation, we propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities. These inequalities introduce an additional loss-gradient-norm term into the generalization bound, which intuitively serves as a surrogate for model complexity. We apply the proposed bound to Bayesian deep nets and empirically analyze the effect of this new loss-gradient-norm term on different neural architectures.
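The loss-gradient-norm term in the bound is an on-average quantity over the data (and, for Bayesian nets, over the weight distribution), so it can be estimated empirically by averaging squared per-example gradient norms. The sketch below is an illustrative PyTorch-style estimator under that reading, not the paper's actual code; the function name avg_sq_grad_norm, the loss function, and the data loader are placeholders assumed for illustration.

    import torch

    def avg_sq_grad_norm(model, loss_fn, data_loader, device="cpu"):
        # Estimate E_z[ || grad_w loss(w, z) ||^2 ] by averaging the squared
        # parameter-gradient norm of per-example losses over the data sample.
        model.to(device)
        total, count = 0.0, 0
        for x, y in data_loader:
            x, y = x.to(device), y.to(device)
            for i in range(x.shape[0]):  # per-example gradients
                model.zero_grad()
                loss = loss_fn(model(x[i:i + 1]), y[i:i + 1])
                loss.backward()
                sq_norm = sum(p.grad.pow(2).sum().item()
                              for p in model.parameters() if p.grad is not None)
                total += sq_norm
                count += 1
        return total / max(count, 1)

For a Bayesian deep net, one would additionally average this quantity over several weight samples drawn from the posterior, since the on-average assumptions in the abstract are stated with respect to both the data and the hypothesis distribution.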