FMLGLN: Fast Multi-layer Graph Linear Network
Abstract
Graph Convolutional Networks (GCNs) are an efficient way to handle graph data. However, a GCN aggregates node information from downstream layers during both training and testing, so it may draw in nodes outside the current training batch, or even outside the training set. During optimization, updating the learnable weight matrix involves every connected node through the graph convolution operation. Batched and inductive learning are therefore unsuitable for GCN, and large-scale graph data pose a further challenge. To this end, this work proposes a Fast Multi-layer Graph Linear Network (FMLGLN), which has a straightforward structure and few hyper-parameters and is designed for large-scale graph data. In a pre-processing step, FMLGLN raises the normalized adjacency matrix to several powers and multiplies each power with the original features to obtain node embeddings. FMLGLN then concatenates these embeddings and trains a linear neural network on them. Because the matrix multiplications are linear and the network consists only of linear layers, training is fast. FMLGLN supports (1) batched training, (2) inductive learning, and (3) large-scale graph data. Experimental results on large-scale graph data demonstrate the effectiveness of the proposed FMLGLN.
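The pre-processing step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `fmlgln_features`, the choice of symmetric normalization with self-loops, and the number of powers `max_power` are assumptions for illustration only.

```python
import numpy as np

def fmlgln_features(A, X, max_power=3):
    """Hypothetical sketch of FMLGLN pre-processing.

    A: dense adjacency matrix, shape (n, n).
    X: original node features, shape (n, d).
    Returns the concatenation [S X, S^2 X, ..., S^K X], where S is the
    symmetrically normalized adjacency with self-loops (an assumed choice).
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                       # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt         # normalized adjacency

    embeddings = []
    H = X
    for _ in range(max_power):
        H = S @ H                               # computes S^k X iteratively
        embeddings.append(H)
    # Concatenated embeddings feed a plain linear network for training,
    # so individual node rows can be drawn batch-by-batch afterwards.
    return np.concatenate(embeddings, axis=1)   # shape (n, max_power * d)
```

Because the powers of the normalized adjacency are applied once in pre-processing, the subsequent training touches only the concatenated feature rows, which is what enables batched and inductive learning in this scheme.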
Elsevier