Input Convex Neural Networks
Abstract
This paper presents the input convex neural network architecture. These are scalar-valued (potentially deep) neural networks with constraints on the network parameters such that the output of the network is a convex function of (some of) the inputs. The networks allow for efficient inference via optimization over some inputs to the network given others, and can be applied to settings including structured prediction, data imputation, reinforcement learning, and others. In this paper we lay the basic groundwork for these models, proposing methods for inference, optimization and learning, and analyze their representational power. We show that many existing neural network architectures can be made input-convex with a minor modification, and develop specialized optimization algorithms tailored to this setting. Finally, we highlight the performance of the methods on multi-label prediction, image completion, and reinforcement learning problems, where we show improvement over the existing state of the art in many cases.
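As a concrete illustration of the parameter constraints the abstract refers to, here is a minimal sketch of a fully input convex network (FICNN) in Python/NumPy. It follows the standard construction: non-negative weights on the hidden-to-hidden ("passthrough") connections, convex non-decreasing activations (ReLU), and unconstrained direct links from the input to every layer. The class name, layer sizes, and initialization are illustrative choices, not taken from the authors' code.

```python
import numpy as np

def relu(x):
    # Convex and non-decreasing, so it preserves input convexity.
    return np.maximum(x, 0.0)

class FICNN:
    """Minimal fully input convex network f(y) (illustrative sketch).

    Layer recursion: z_{i+1} = relu(Wz_i z_i + Wy_i y + b_i), with Wz_i >= 0.
    The final layer is affine, so f can take any real value.
    """

    def __init__(self, dims, seed=0):
        # dims = [input_dim, hidden_1, ..., hidden_k, 1] (scalar output)
        rng = np.random.default_rng(seed)
        # Direct links from the input y to every layer; these are
        # unconstrained because affine functions of y are convex.
        self.Wy = [0.1 * rng.standard_normal((d, dims[0])) for d in dims[1:]]
        self.b = [np.zeros(d) for d in dims[1:]]
        # Hidden-to-hidden weights: constrained non-negative so each layer
        # is a non-negative combination of the previous (convex) layer.
        self.Wz = [np.abs(0.1 * rng.standard_normal((d2, d1)))
                   for d1, d2 in zip(dims[1:-1], dims[2:])]

    def __call__(self, y):
        z = relu(self.Wy[0] @ y + self.b[0])
        for Wz, Wy, b in zip(self.Wz[:-1], self.Wy[1:-1], self.b[1:-1]):
            # Clipping keeps Wz >= 0 even after unconstrained gradient steps.
            z = relu(np.clip(Wz, 0.0, None) @ z + Wy @ y + b)
        # Final layer: affine in (z, y), no activation; the sum stays convex.
        return (np.clip(self.Wz[-1], 0.0, None) @ z
                + self.Wy[-1] @ y + self.b[-1])[0]

# f(y) is convex in y, so inference problems like min_y f(y) are tractable.
f = FICNN([3, 16, 16, 1])
print(f(np.ones(3)))
```

Convexity follows from a standard composition rule: a non-negative combination of convex functions, passed through a convex non-decreasing function, is convex, and affine functions of the input are convex. This is why only the hidden-to-hidden weights need the non-negativity constraint, which is the "minor modification" to existing architectures mentioned above.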