GLASS: GNN with labeling tricks for subgraph representation learning

X Wang, M Zhang - International Conference on Learning Representations (ICLR), 2021 - openreview.net
Despite the remarkable achievements of Graph Neural Networks (GNNs) on graph representation learning, few works have tried to use them to predict properties of subgraphs in the whole graph. The existing state-of-the-art method SubGNN introduces an overly complicated subgraph-level GNN model which synthesizes three artificial channels, each of which has two carefully designed subgraph-level message passing modules, yet only slightly outperforms a plain GNN which performs node-level message passing and then pools node embeddings within the subgraph. By analyzing SubGNN and plain GNNs, we find that the key to subgraph representation learning might be to distinguish nodes inside and outside the subgraph. With this insight, we propose an expressive and scalable labeling trick, namely max-zero-one, to enhance plain GNNs for subgraph tasks. The resulting model is called GLASS (GNN with LAbeling trickS for Subgraph). We theoretically characterize GLASS's expressive power. Compared with SubGNN, GLASS is more expressive, more scalable, and easier to implement. Experiments on eight benchmark datasets show that GLASS outperforms the strongest baseline on average, and ablation analysis shows that our max-zero-one labeling trick can substantially boost the performance of a plain GNN, which illustrates the effectiveness of the labeling trick on subgraph tasks. Furthermore, training a GLASS model takes only a fraction of the time needed for a SubGNN on average.
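To make the zero-one labeling idea concrete, the sketch below shows a plain node-level GNN that marks nodes inside the query subgraph with label 1 and all other nodes with 0, appends that label to the node features, runs standard message passing, and then pools the embeddings of the subgraph's nodes. This is a simplified illustration of the labeling trick, not the authors' GLASS implementation: the class name ZeroOneLabelGNN, the two-layer GCN backbone, the mean pooling, and the layer sizes are assumptions made for the example.

```python
# Simplified sketch of zero-one subgraph labeling with a plain GNN
# (illustrative only; not the authors' GLASS code).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv


class ZeroOneLabelGNN(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        # +1 input channel for the zero-one subgraph membership label
        self.conv1 = GCNConv(in_dim + 1, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, edge_index, subgraph_nodes):
        # Zero-one labeling: 1 for nodes inside the query subgraph, 0 otherwise
        label = torch.zeros(x.size(0), 1, device=x.device)
        label[subgraph_nodes] = 1.0
        # Plain node-level message passing on the labeled graph
        h = torch.relu(self.conv1(torch.cat([x, label], dim=-1), edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        # Pool node embeddings within the subgraph and predict its property
        return self.readout(h[subgraph_nodes].mean(dim=0))


# Toy usage: a 5-node graph and a query subgraph {0, 1, 2}
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
model = ZeroOneLabelGNN(in_dim=8, hidden_dim=16, out_dim=2)
logits = model(x, edge_index, torch.tensor([0, 1, 2]))
```

The labels change the node representations produced by message passing (not just the pooling step), which is what lets the model distinguish nodes inside the subgraph from those outside it; GLASS's max-zero-one variant additionally handles batches of possibly overlapping subgraphs.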