AC/DC: In-Database Learning Thunderstruck
Proceedings of the Second Workshop on Data Management for End-to-End Machine …, 2018 · dl.acm.org
We report on the design and implementation of the AC/DC gradient descent solver for a class of optimization problems over normalized databases. AC/DC decomposes an optimization problem into a set of aggregates over the join of the database relations. It then uses the answers to these aggregates to iteratively improve the solution to the problem until it converges.
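To make this decomposition concrete, here is a minimal sketch (an illustration only, not AC/DC's implementation; the least-squares objective, the function name train_via_aggregates, and the toy data are assumptions for this example). For least-squares regression the gradient of 1/2 * ||X theta - y||^2 is Sigma theta - c with Sigma = X^T X and c = X^T y, so the sum aggregates Sigma and c can be computed once over the (joined) data and then reused by every gradient descent iteration.

import numpy as np

def train_via_aggregates(X, y, lr=0.02, iters=5000):
    # Aggregates over the data: Sigma[j][k] = SUM(x_j * x_k), c[j] = SUM(x_j * y).
    # Over a normalized database, these sums would be answered by aggregate
    # queries over the join rather than over a materialized matrix X.
    Sigma = X.T @ X
    c = X.T @ y
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = Sigma @ theta - c   # gradient expressed purely via the aggregates
        theta -= lr * grad
    return theta

# Toy usage: fit y ~= theta_0 + theta_1 * x; converges to roughly [1.0, 1.0].
X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 5.0]])
y = np.array([3.0, 4.0, 6.0])
print(train_via_aggregates(X, y))

Once the aggregates are available, each iteration touches only the model parameters, not the data, which is what makes the decomposition attractive for large joins.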
The challenges faced by AC/DC are the large database size, the mixture of continuous and categorical features, and the large number of aggregates to compute. AC/DC addresses these challenges by employing a sparse data representation, factorized computation, problem reparameterization under functional dependencies, and a data structure that supports shared computation of aggregates.
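The factorized-computation idea can be shown in miniature (again a hedged sketch under assumed toy relations R and S and the made-up aggregate SUM(a * c); it is not the paper's data structure): an aggregate over a join can be pushed past the join by keeping partial sums per join-key value, so the join result is never materialized.

from collections import defaultdict

R = [(1.0, 'x'), (2.0, 'x'), (3.0, 'y')]    # tuples (a, b)
S = [('x', 10.0), ('x', 20.0), ('y', 5.0)]  # tuples (b, c)

# Partial aggregates per join-key value b: SUM(a) from R, SUM(c) from S.
sum_a = defaultdict(float)
sum_c = defaultdict(float)
for a, b in R:
    sum_a[b] += a
for b, c in S:
    sum_c[b] += c

# SUM(a * c) over R join S factorizes as SUM over b of SUM(a | b) * SUM(c | b),
# avoiding the intermediate result of size |R join S|.
agg = sum(sum_a[b] * sum_c[b] for b in sum_a if b in sum_c)

# Sanity check against the materialized join.
naive = sum(a * c for a, b1 in R for b2, c in S if b1 == b2)
assert abs(agg - naive) < 1e-9
print(agg)  # 105.0

The same grouping on the join key also lets many aggregates share one pass over the data, which is the intuition behind shared aggregate computation.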
To train polynomial regression models and factorization machines with up to 154K features over the natural join of all relations from a real-world dataset of up to 86M tuples, AC/DC needs at most 30 minutes on one core of a commodity machine. This is up to three orders of magnitude faster than its competitors R, MADlib, libFM, and TensorFlow whenever they finish, that is, whenever they do not hit the memory limit, the 24-hour timeout, or internal design limitations.