DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems
Proceedings of the Web Conference 2021 (WWW '21), 2021. dl.acm.org
Learning effective feature crosses is key to building recommender systems. However, the sparse and large feature space requires exhaustive search to identify effective crosses. The Deep & Cross Network (DCN) was proposed to automatically and efficiently learn bounded-degree predictive feature interactions. Unfortunately, in models that serve web-scale traffic with billions of training examples, DCN's cross network showed limited expressiveness in learning more predictive feature interactions. Despite significant research progress, many deep learning models in production still rely on traditional feed-forward neural networks, which learn feature crosses inefficiently.
In light of the pros and cons of DCN and existing feature interaction learning approaches, we propose an improved framework, DCN-V2, to make DCN more practical in large-scale industrial settings. In a comprehensive experimental study with extensive hyper-parameter search and model tuning, we observed that DCN-V2 outperforms all state-of-the-art algorithms on popular benchmark datasets. The improved DCN-V2 is more expressive yet remains cost-efficient at feature interaction learning, especially when coupled with a mixture of low-rank architectures. DCN-V2 is simple, can be easily adopted as a building block, and has delivered significant gains in offline accuracy and online business metrics across many web-scale learning to rank systems at Google. Our code and tutorial are open-sourced as part of TensorFlow Recommenders (TFRS).
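To make the cross layer referenced in the abstract concrete, the sketch below is a minimal NumPy illustration (not the open-sourced TFRS implementation) of the DCN-V2 recurrence x_{l+1} = x_0 * (W_l x_l + b_l) + x_l and of the low-rank variant in which W_l is factorized as U_l V_l^T; the dimension and variable names (d, r, batch, cross_layer, low_rank_cross_layer) are illustrative assumptions, not names from the paper or library.

    # Minimal sketch of a DCN-V2-style cross layer, assuming the recurrence
    # x_{l+1} = x_0 * (W x_l + b) + x_l and a low-rank factorization W ~= U V^T.
    import numpy as np

    def cross_layer(x0, xl, W, b):
        # Full-rank cross layer: element-wise product keeps interactions with x0.
        return x0 * (xl @ W.T + b) + xl

    def low_rank_cross_layer(x0, xl, U, V, b):
        # Low-rank variant: project xl to rank r via V, map back to dim d via U.
        return x0 * ((xl @ V) @ U.T + b) + xl

    # Toy usage with hypothetical sizes: embedding dim d, rank r << d.
    d, r, batch = 8, 2, 4
    rng = np.random.default_rng(0)
    x0 = rng.normal(size=(batch, d))        # concatenated input embeddings
    W = rng.normal(size=(d, d)) * 0.1       # full-rank cross weights
    U = rng.normal(size=(d, r)) * 0.1
    V = rng.normal(size=(d, r)) * 0.1
    b = np.zeros(d)

    x1 = cross_layer(x0, x0, W, b)              # first layer uses x_l = x_0
    x2 = low_rank_cross_layer(x0, x1, U, V, b)  # stacked low-rank layer
    print(x2.shape)                             # (4, 8)

Stacking such layers yields explicit polynomial feature crosses of bounded degree, while the low-rank factorization trades a small amount of expressiveness for a large reduction in parameters and compute.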