Article

Self-Supervised Graph Representation Learning via Information Bottleneck

Junhua Gu, Zichen Zheng, Wenmiao Zhou, Yajuan Zhang, Zhengjun Lu and Liang Yang
1 School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
2 Hebei Province Key Laboratory of Big Data Calculation, Tianjin 300401, China
3 Defense Engineering Institute AMS, PLA, Beijing 100036, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(4), 657; https://doi.org/10.3390/sym14040657
Submission received: 27 February 2022 / Revised: 19 March 2022 / Accepted: 22 March 2022 / Published: 24 March 2022
(This article belongs to the Special Issue Symmetry and Asymmetry Studies on Graph Data Mining)

Abstract

Graph representation learning has become a mainstream method for processing network-structured data, and most graph representation learning methods rely heavily on labeling information for downstream tasks. Since labeled information is rare in the real world, adopting self-supervised learning to solve the graph neural network problem is a significant challenge. Currently, existing graph neural network approaches attempt to maximize mutual information for self-supervised learning, which leads to a large amount of redundant information in the graph representation and thus affects the performance of downstream tasks. Therefore, the self-supervised graph information bottleneck (SGIB) proposed in this paper uses the symmetry and asymmetry of graphs to establish contrastive learning and introduces information bottleneck theory as the training loss. The model extracts the common features of both views and the independent features of each view by maximizing the mutual information estimation between the local high-level representation of one view and the global summary vector of the other view. It also removes redundant information not relevant to the target task by minimizing the mutual information between the local high-level representations of the two views. Extensive experimental results on three public datasets and two large-scale datasets show that the SGIB model can learn higher-quality node representations and that several classical network analysis tasks, such as node classification and node clustering, are improved compared with existing models in an unsupervised environment. In addition, an experiment with deeper networks shows that the SGIB model can also alleviate the over-smoothing problem to a certain extent. We can therefore infer from these network analysis experiments that introducing information bottleneck theory to remove redundant information is an effective way to improve the performance of downstream tasks.

1. Introduction

A graph is a kind of data structure that models a set of nodes and the edges between them, where the node represents a specific class of entity and the edge represents the connection between them. As many learning tasks require processing information that contains large numbers of objects and their relationships, graph-structured data have received increasing attention. Graph-structured data can be utilized in a variety of applications in the real world, including analysis of social networks, modeling of physical systems, learning of molecular fingerprints, prediction of protein interfaces, and classification of diseases. To better study graph-structured data, a model is needed to learn vector representations from the input graphs.
Driven by recent advances in deep learning [1], the paradigm of graph learning has shifted from structural pattern discovery to graph representation learning [2,3] in the past few years. Specifically, graph representation learning converts graph vertices, edges, or subgraphs into low-dimensional embeddings [4,5,6], thus preserving the important structural information of the graph. The learned graph representations can be used for many downstream tasks, such as node classification [7,8], graph classification [9,10], link prediction [11,12], etc. Most existing graph neural network models are built in a supervised manner, which requires a large amount of labeled input for training. However, labeling graph data is often impractically time- and resource-consuming; for example, determining the pharmacological effects of graph-represented drug molecules may require prolonged and expensive in vivo animal experiments. Therefore, recent research has focused on developing self-supervised learning for graph neural networks that requires only limited or even no labeling.
However, current graph neural networks still have some obstacles to overcome. One of the most intractable problems is that the features of neighboring nodes may contain redundant information, which may negatively affect the prediction for the current node. In addition, the over-reliance on edge messages during training makes the trained model more vulnerable to noise and adversarial attacks on the input graph structure. There is limited research on how to handle the redundant information contained in the input graph. In this paper, we believe that the key to clearing out redundant information is to employ information bottleneck theory. The information bottleneck principle requires that the encoded representation contain as little information as possible while remaining sufficient for the downstream task. An encoder trained with the information bottleneck is therefore expected to be more robust, because redundant information is removed.
Self-supervised graph representation learning via information bottleneck (SGIB) is proposed in this paper. The model uses the symmetry and asymmetry of the graph to introduce contrastive learning and employs the information bottleneck as the training loss to achieve self-supervised learning. For the nodes in a network, not all of their feature and topological information necessarily plays a positive role in the downstream tasks, and some pieces of information may even have a negative impact. Ideally, the encoded node vector contains all the information necessary for the downstream task, so in the process of training the model, the encoded node vector is required to contain as little redundant information as possible.
Specifically, the SGIB model trains the encoder by maximizing the mutual information between the node-level representation of one view and the graph-level representation of the other view, while minimizing the mutual information between the node-level representations of the two views. Such a strategy enables the optimized model to process the downstream task by extracting the common features of both views and independent features of each view, while removing redundant information that is irrelevant to the target task. Therefore, the graph representations learned by SGIB will be more robust and distinguishable. In summary, the main contributions of this study are summarized as follows:
  • Introducing the information bottleneck into contrastive learning, thus achieving the purpose of self-supervised learning.
  • Considering that redundant data have been neglected in past studies, this paper proposes using the information bottleneck as the objective function of the optimization model. The information bottleneck is applied by maximizing the mutual information between the node-level representation of one view and the graph-level representation of the other view, while minimizing the mutual information between the node-level representations of the two views.
  • A variety of network analysis experiments, including node classification and node clustering, were conducted on three public datasets as well as two large-scale datasets. Numerous experiments show that the method outperforms the best existing methods. In addition, an in-depth analysis of the model is conducted, and the experimental results show that SGIB can alleviate the over-smoothing problem to a certain extent.

2. Related Work

In recent years, many researchers have investigated how to build unsupervised learning through consistency. The most representative solution is deep graph infomax (DGI) [13], which first embeds an input graph, then summarizes the input graph into a vector by a readout function, and finally maximizes the mutual information between the node-level representations of the input graph and the graph-level vector representation. Following DGI, graph representation learning via graphical mutual information maximization (GMI) [14] proposes a node-level contrastive objective, which maximizes the mutual information between node representations and the input attribute features as well as the topological information. Contrastive multi-view representation learning on graphs (MVGRL) [15] proposes a graph diffusion method to generate another input graph, constructs subgraphs by uniform sampling, and then contrasts node representations with the global embeddings of the two views to obtain the vector representation of the graph. Deep graph contrastive representation learning (GRACE) [16] extends the idea of maximizing graph mutual information across nodes and subgraphs, generates two correlated views by random corruption, and uses a contrastive loss function to maximize the agreement of node representations in the two views. Graph contrastive learning with adaptive augmentation (GCA) [17] learns graph representations by creating new data with reasonable transformations and maximizing the mutual information under different augmentations using feature consistency. Heterogeneous graph information bottleneck (HGIB) [18] utilizes the information bottleneck to implement the consensus hypothesis between different meta-paths in an unsupervised manner. Subgraph information bottleneck (SIB) [19] recognizes predictive subgraphs for graph classification by using the information bottleneck to remove redundancy and noise. Variational graph information bottleneck (VGIB) [20] introduces a noise-injection method to recognize subgraphs based on the graph information bottleneck.
Common to the methods that do not apply the information bottleneck is the reliance on maximizing mutual information between whole and part, between part and part, and between different views originating from the same network to perform unsupervised training. However, none of the above works using this strategy can handle the redundant information contained in the input data. Meanwhile, existing information-bottleneck-based methods cannot handle node-level tasks on homogeneous graphs. To address these problems, this paper proposes a self-supervised graph representation learning method based on the information bottleneck, which can extract both the common features of two views and the independent features of each view, while removing redundant information irrelevant to the target task, resulting in node representations of higher quality.

3. Preliminaries

3.1. Homogeneous Graph

Given an undirected graph G = (V, E), V represents the set of nodes and E represents the set of edges. The relationships between nodes are provided in the form of an adjacency matrix $A \in \mathbb{R}^{N \times N}$. $X = \{x_1, x_2, \ldots, x_N\}$ denotes the set of node features of the input data, N represents the number of nodes in the graph, and $x_i \in \mathbb{R}^{F}$ represents the features of node i. In all experiments in this paper, it is assumed that the edges between nodes are unweighted; i.e., if there is an edge between nodes i and j in the graph, then $A_{ij} = 1$; otherwise $A_{ij} = 0$.
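For concreteness, the following minimal sketch builds the adjacency matrix A and the feature matrix X in the form assumed above; the toy edge list and feature values are hypothetical and serve only to illustrate the notation.

```python
import numpy as np

# Toy homogeneous graph with N = 4 nodes and F = 3 features per node;
# the edge list and features are hypothetical, only to illustrate the notation above.
edges = [(0, 1), (0, 2), (2, 3)]              # unweighted, undirected edges
N, F = 4, 3

A = np.zeros((N, N), dtype=np.float32)        # adjacency matrix A of size N x N
for i, j in edges:
    A[i, j] = 1.0                             # A_ij = 1 if an edge connects i and j
    A[j, i] = 1.0                             # symmetric, since the graph is undirected

X = np.random.rand(N, F).astype(np.float32)   # node feature matrix, row x_i has F features
```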

3.2. Mutual Information Estimation

Mutual information (MI) is a Shannon entropy-based measure of the degree of interdependence among random variables. Unlike the common similarity measures, mutual information can capture the nonlinear correlation between variables, so it can be considered as a measure of the true dependence between variables. For two random variables X and Y, the mutual information between them is shown as follows:
I(X;Y) = H(X) - H(X|Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)},
H(X) is the entropy of X, while H(X|Y) is the conditional entropy of X given the random variable Y. From a probabilistic perspective, mutual information is derived from the joint probability distribution p(x,y) and the marginal probability distributions p(x) and p(y) of the random variables X and Y.
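As a small worked example of Equation (1), the snippet below computes the mutual information of two binary variables directly from a joint distribution and its marginals; the joint distribution used here is a hypothetical example.

```python
import numpy as np

# I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ), Equation (1);
# the joint distribution below is a hypothetical example.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])                  # joint distribution p(x, y)
p_x = p_xy.sum(axis=1, keepdims=True)          # marginal p(x)
p_y = p_xy.sum(axis=0, keepdims=True)          # marginal p(y)

mask = p_xy > 0                                # skip zero-probability pairs (0 * log 0 = 0)
mi = np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask]))
print(mi)                                      # approximately 0.193 nats
```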
However, since mutual information can be computed exactly only for discrete variables or for continuous variables with known distributions, its lower bound is usually estimated using known estimators. A neural network-based estimator, the mutual information neural estimator (MINE) [21], uses the Donsker-Varadhan representation of the KL divergence to derive a lower bound on the mutual information, where the function T is usually a neural network:
I(X;Y) \geq \sup_{T} \mathbb{E}_{P_{XY}}[T] - \log\left(\mathbb{E}_{P_X P_Y}\left[e^{T}\right]\right).
In this paper, the mutual information estimation is applied to the local vector representation and global vector representation of nodes to achieve the self-supervised learning of the network.
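The following sketch illustrates the Donsker-Varadhan lower bound of Equation (2) in the spirit of MINE [21]; the statistics network T, its dimensions, and the batch layout are assumptions made for illustration, not the estimator used later in this paper.

```python
import torch
import torch.nn as nn

# Statistics network T for the Donsker-Varadhan bound (hypothetical architecture).
T = nn.Sequential(nn.Linear(2 * 16, 64), nn.ReLU(), nn.Linear(64, 1))

def dv_lower_bound(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """x, y: (batch, 16) samples drawn jointly; returns a lower bound on I(X;Y)."""
    joint = T(torch.cat([x, y], dim=1))              # T evaluated on samples of P_XY
    y_perm = y[torch.randperm(y.size(0))]            # shuffling y approximates P_X * P_Y
    marginal = T(torch.cat([x, y_perm], dim=1))
    return joint.mean() - torch.log(torch.exp(marginal).mean())
```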

3.3. Information Bottleneck

The core principle of information bottleneck (IB) theory is that the optimal graph representation should contain the minimum and sufficient information to complete the downstream prediction task [22]. In other words, the amount of information about the task does not change due to the encoding process, as shown in Equation (3):
I(x;y) = I(h;y),
where x is the input data, h is the learned graph representation, and y is the label. To make the graph representation robust, the information bottleneck principle [23] tries to discard all information from the input that is not helpful for predicting the labels. The information bottleneck requires the graph representation to provide the maximum amount of information about the target so that the prediction is accurate, while also preventing the graph representation from absorbing information from the data that is not relevant to the prediction. Therefore, the information bottleneck [24] minimizes the mutual information between the data x and its representation h, while maximizing the mutual information between the representation h and the label y. The objective function is shown in Equation (4):
R_{IB}(\theta) = I(y;h) - \beta I_{\theta}(x;h),
where θ denotes the parameters of the neural network and β is a hyperparameter controlling the trade-off between the two terms. The second term can be split into two parts according to the chain rule of mutual information, as shown in Equation (5):
I(x;h) = I(x;h|y) + I(y;h),
where the second term is fixed by Equation (3), because the representation h contains the minimum sufficient information needed to accomplish the target task. The first term represents information that is not useful for the target task, i.e., redundant information. Therefore, minimizing the mutual information I(x;h) is equivalent to minimizing the redundant information I(x;h|y) [25]; rewriting Equation (4) gives the formula for the information bottleneck, as shown in Equation (6):
R_{IB}(\theta) = I(y;h) - \beta I(x;h|y).
However, this paper focuses on training neural networks that can remove redundant information when the labels of downstream tasks are not available. This part will be elaborated on in Section 4.

4. Self-Supervised Graph Representation Learning via Information Bottleneck

SGIB is a self-supervised graph representation learning model based on the information bottleneck, and the model framework is shown in Figure 1. The SGIB model is divided into three modules. Firstly, two random Dropedge [26] operations are performed to obtain two subgraphs different from the original graph, and contrastive learning is used to extend the information bottleneck to self-supervised learning. Secondly, to extract the common features of the two subgraphs and the independent features of each view, SGIB maximizes the mutual information between the node encoding aggregated from first-order neighbors in one subgraph and the graph-level representation of the other subgraph, for each subgraph in turn. In addition, to remove redundant information irrelevant to the target task, SGIB minimizes the mutual information between the first-order node encodings of the two subgraphs. Finally, the information bottleneck is used as the loss function of the model to complete the training and optimization of the objective.

4.1. Self-Supervised Information Bottleneck

In order to train a neural network that can remove redundant information and obtain high-quality node representations when downstream task labels are unavailable, we use the contrastive learning method from self-supervised learning. Contrastive learning learns feature representations by comparing the data with positive and negative samples in the feature space. In order to obtain an information bottleneck without labeling information, we apply contrastive learning so that the two views serve as each other's "labels":
R_{IB}(\theta) = I_{\theta}(h^{(2)}; h^{(1)}) - \beta I_{\theta}(x; h^{(1)}),
where h(1) and h(2) are the vector representations of view one and view two, respectively. Maximizing RIB requires maximizing the first term while minimizing the second term. The second term can be split into two parts according to the mutual information chain rule:
I(x; h^{(1)}) = I(x; h^{(1)} \mid h^{(2)}) + I(h^{(2)}; h^{(1)}).
The first term in Equation (8) represents the information in the vector representation h(1) of view one that is specific to the input feature x and not shared with view two; this term should be minimized, while the second term, the mutual information between the vector representations of view one and view two, should be maximized. The joint action of these two steps maximizes RIB, so, analogously to Equation (6), Equation (7) is rewritten to obtain the generalized self-supervised information bottleneck:
R_{IB}(\theta) = I_{\theta}(h^{(2)}; h^{(1)}) - \beta I_{\theta}(x; h^{(1)} \mid h^{(2)}).
The difficulty of contrastive learning lies in constructing positive and negative samples. In this study, Dropedge is used to generate two different subgraphs so that they serve as each other's positive samples, and the graph obtained after randomly permuting the rows of the original adjacency matrix is used as a negative sample. The Dropedge algorithm augments the original data: to obtain two different output graphs G1 and G2, edges in the original graph are randomly discarded by forcing some of the non-zero elements of the adjacency matrix A to zero at each training cycle, with drop rates p1 and p2, respectively (p1 and p2 can be equal). In this paper, the adjacency matrix after Dropedge is denoted Adrop, and the adjacency matrices corresponding to graphs G1 and G2 are denoted Adrop1 and Adrop2, which are related to the initial adjacency matrix A as follows:
A_{drop1} = A - A'_{1},
A_{drop2} = A - A'_{2},
where $A'_{i}$ is a sparse matrix whose size equals that of the initial adjacency matrix and whose non-zero entries correspond to the dropped edges. Each epoch of the training process performs an independent Dropedge for data augmentation, so that a different Adrop is generated each time, increasing the randomness and diversity of the input data. In addition, since the over-smoothing phenomenon is severe in unsupervised learning, the SGIB model uses Dropedge to augment the original data, which helps alleviate the over-smoothing and overfitting problems in deep networks.
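A minimal dense sketch of the Dropedge augmentation described above is given below; the function name and the example drop rates are hypothetical, and an efficient implementation would operate on a sparse edge list as in the original Dropedge paper [26].

```python
import torch

def dropedge(adj: torch.Tensor, p: float) -> torch.Tensor:
    """Randomly drop each undirected edge of a dense adjacency matrix with rate p,
    i.e., force some non-zero entries of A to zero, giving A_drop = A - A'."""
    upper = torch.triu(adj, diagonal=1)                  # consider each undirected edge once
    keep = (torch.rand_like(upper) > p).float() * upper  # keep an edge with probability 1 - p
    return keep + keep.t()                               # re-symmetrize the result

# Two different subgraphs generated independently from the same original graph A;
# the drop rates 0.1 and 0.2 are illustrative (cf. the sampling probabilities in Section 5.1).
# A_drop1 = dropedge(A, p=0.1)
# A_drop2 = dropedge(A, p=0.2)
```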

4.2. Encoders

In this paper, the commonly used graph convolutional neural network (GCN) is chosen as the base encoder. As shown in the top and bottom views of Figure 1, we set up separate encoders gθ(•) and gω(•) for each subgraph. The two subgraphs are propagated and mapped by one layer of GCN to obtain their respective node representations as follows:
Z^{(1)} = \sigma\left((\tilde{D}^{(1)})^{-\frac{1}{2}} \tilde{A}^{(1)} (\tilde{D}^{(1)})^{-\frac{1}{2}} X \theta\right),
Z^{(2)} = \sigma\left((\tilde{D}^{(2)})^{-\frac{1}{2}} \tilde{A}^{(2)} (\tilde{D}^{(2)})^{-\frac{1}{2}} X \omega\right)
where $\tilde{A} = A + I$ is the adjacency matrix with self-loops and $\tilde{D}$ is the degree matrix of $\tilde{A}$, whose diagonal elements are the node degrees. θ and ω are the learnable parameters of the encoders gθ(•) and gω(•), respectively, and σ is a nonlinear activation function (PReLU or ReLU). The node-level vector representations z(1) and z(2) are aggregated into graph-level vector representations by a readout function $\varphi(\cdot): \mathbb{R}^{n \times d} \rightarrow \mathbb{R}^{d}$. Finally, the aggregated graph-level vectors are mapped into (0, 1) to obtain the final graph-level representations c(1) and c(2) used for training and optimization.
Since the SGIB model uses the contrastive learning method from self-supervised learning, it requires the selection of positive and negative samples. We choose the graph-level vector representation c(2) as the positive sample of the node-level vector representation z(1), and similarly choose the graph-level vector representation c(1) as the positive sample of the node-level vector representation z(2). The negative sample is the graph obtained after permuting the node positions of the original graph. Specifically, a random row permutation of the original adjacency matrix is denoted Ashuf. The perturbed adjacency matrix Ashuf is encoded by the neural network gθ(•) to obtain the representation $\tilde{z}^{(1)}$, which serves as the negative sample of z(1) (the negative sample of view two is obtained analogously with the encoder gω(•)). Finally, z(1) + z(2) is returned to the downstream task.
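The sketch below shows a one-layer GCN encoder with the readout and corruption steps described above. It is a simplified dense version under stated assumptions: the hidden size, mean readout, and sigmoid squashing are choices consistent with the description, not an exact reproduction of the authors' implementation.

```python
import torch
import torch.nn as nn

class GCNEncoder(nn.Module):
    """One-layer GCN encoder g(.), cf. Equations (12)-(13); dense adjacency for simplicity."""
    def __init__(self, in_dim: int, hid_dim: int = 512):
        super().__init__()
        self.linear = nn.Linear(in_dim, hid_dim, bias=False)   # learnable parameters (theta or omega)
        self.act = nn.PReLU()

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_tilde = adj + torch.eye(adj.size(0))                 # A~ = A + I (add self-loops)
        d_inv_sqrt = torch.diag(a_tilde.sum(dim=1).pow(-0.5))  # D~^{-1/2}
        a_norm = d_inv_sqrt @ a_tilde @ d_inv_sqrt             # symmetric normalization
        return self.act(a_norm @ self.linear(x))               # node-level representation Z

def readout(z: torch.Tensor) -> torch.Tensor:
    """Readout phi(.): average the node representations and map the result into (0, 1)."""
    return torch.sigmoid(z.mean(dim=0))

def corrupt(adj: torch.Tensor) -> torch.Tensor:
    """Negative-sample graph A_shuf: a random row permutation of the adjacency matrix."""
    return adj[torch.randperm(adj.size(0))]
```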

4.3. Training and Optimization

The essence of the information bottleneck is to retain the information valuable for predicting the label while discarding the information redundant for that prediction; the information bottleneck in the case of multiple inputs is shown in Figure 2. A and B stand for the two inputs and Y stands for the target task (label); region 1 stands for the independent features required for the target task that are in input A but not in input B, region 2 stands for the common features required for the target task in inputs A and B, region 3 stands for the independent features required for the target task that are in input B but not in input A, and region 4 stands for the information in the inputs that is not related to the target task, i.e., redundant information.
In order to train the encoder end-to-end and learn node representations for agnostic downstream tasks, we use mutual information to measure the dependencies among the two inputs and the target, following information bottleneck theory for the multi-input case. The encoder is trained by maximizing the common and independent features of the two subgraphs (i.e., parts 1, 2, and 3) while minimizing the redundant information of the two subgraphs that is irrelevant to the target task (i.e., part 4). For the two input subgraphs, the information bottleneck formula can be reformulated as follows:
R^{(1)}_{IB}(\theta) = I_{\theta}(z^{(1)}; c^{(2)}) - \beta_1 I_{\theta}(z^{(2)}; z^{(1)} \mid c^{(2)}),
R^{(2)}_{IB}(\omega) = I_{\omega}(z^{(2)}; c^{(1)}) - \beta_2 I_{\omega}(z^{(1)}; z^{(2)} \mid c^{(1)})
where θ and ω are the learnable parameters of the two GCNs, respectively. The loss function of the SGIB model is expressed as the average of R(1) and R(2), as shown in Equation (16), where β1 and β2 are two hyperparameters:
L_{loss}(\theta, \omega, \beta_1, \beta_2) = \frac{I_{\theta}(z^{(1)}; c^{(2)}) + I_{\omega}(z^{(2)}; c^{(1)})}{2} - \frac{\beta_1 I_{\theta}(z^{(2)}; z^{(1)} \mid c^{(2)}) + \beta_2 I_{\omega}(z^{(1)}; z^{(2)} \mid c^{(1)})}{2},
where Iθ(z(1); c(2)) can be transformed as follows to obtain its lower bound:
I_{\theta}(z^{(1)}; c^{(2)}) = I_{\theta\omega}(c^{(2)}; c^{(1)} z^{(1)}) - I_{\theta\omega}(c^{(2)}; c^{(1)} \mid z^{(1)}) = I_{\theta\omega}(c^{(2)}; c^{(1)} z^{(1)}) = I_{\theta\omega}(c^{(2)}; c^{(1)}) + I_{\theta\omega}(c^{(2)}; z^{(1)} \mid c^{(1)}) \geq I_{\theta\omega}(c^{(2)}; c^{(1)}).
The second equality holds because c(1) is computed deterministically from z(1), so that Iθω(c(2); c(1)|z(1)) = 0; the final inequality follows from the non-negativity of mutual information and becomes an equality when the graph representations contain no redundant information, i.e., when Iθω(c(2); z(1)|c(1)) = 0. Similarly, the lower bound of Iω(z(2); c(1)) can be deduced:
I_{\omega}(z^{(2)}; c^{(1)}) \geq I_{\theta\omega}(c^{(1)}; c^{(2)}).
For the second term in Equation (16), an upper bound of Iθ(z(2); z(1)|c(2)) can be obtained as follows:
I_{\theta}(z^{(2)}; z^{(1)} \mid c^{(2)}) = I_{\theta\omega}(z^{(2)}; z^{(1)}) - I_{\theta\omega}(z^{(2)}; z^{(1)}; c^{(2)}) \leq I_{\theta\omega}(z^{(2)}; z^{(1)}).
Similarly, the upper bound of Iω(z(1);z(2)|c(1)) can be deduced:
I_{\omega}(z^{(1)}; z^{(2)} \mid c^{(1)}) \leq I_{\theta\omega}(z^{(1)}; z^{(2)}).
In summary, bounds on the two parts of the loss are obtained. The final objective of the model is obtained after introducing the balancing weight parameters, as shown in Equation (21):
L_{loss}(\theta, \omega, \alpha, \beta) \geq \alpha I_{\theta\omega}(c^{(1)}; c^{(2)}) - \beta I_{\theta\omega}(z^{(1)}; z^{(2)}).
From Section 3.2, it is clear that the mutual information cannot be calculated exactly, so its lower bound is usually estimated using known estimators. In this paper, the goal of optimization is not to obtain a specific value but to maximize the mutual information, so non-KL-divergence estimators can also be used, such as the Jensen-Shannon mutual information estimator (JSD) [27] and the noise-contrastive estimator (InfoNCE) [28]. We chose the JSD estimator because the noise-contrastive estimator is more sensitive to the number of negative samples; specifically, the effectiveness of InfoNCE decreases as the number of negative samples decreases. The JSD estimator is as follows:
\hat{I}^{(JSD)}_{\varpi}(h_i; c) = \mathbb{E}_{\mathbb{P}}\left[-\mathrm{sp}\left(-D_{\varpi}(h_i; c)\right)\right] - \mathbb{E}_{\tilde{\mathbb{P}}}\left[\mathrm{sp}\left(D_{\varpi}(\tilde{h}_i; c)\right)\right].
In the above equation, Dϖ is the discriminator, ϖ denotes the discriminator parameters, $\tilde{\mathbb{P}}$ is the negative-sample distribution, and $\mathrm{sp}(z) = \ln(1 + e^{z})$ is the softplus function.
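A sketch of the JSD estimator of Equation (22) and of the surrogate objective of Equation (21) is shown below. The bilinear discriminator, the dimensions, and the way the two estimated terms are combined are assumptions made for illustration; in particular, `sgib_loss` simply negates the bound so that it can be minimized with a standard optimizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Bilinear discriminator D_w(h_i; c) scoring node representations against a summary vector."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, h: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        return self.bilinear(h, c.expand_as(h)).squeeze(-1)   # one score per node

def jsd_mi(disc: Discriminator, h_pos, h_neg, c) -> torch.Tensor:
    """JSD estimator: E_P[-sp(-D(h, c))] - E_P~[sp(D(h~, c))], with sp(z) = log(1 + e^z)."""
    pos = -F.softplus(-disc(h_pos, c)).mean()   # positive pairs (joint distribution)
    neg = F.softplus(disc(h_neg, c)).mean()     # negative pairs (corrupted graph)
    return pos - neg

def sgib_loss(mi_cc: torch.Tensor, mi_zz: torch.Tensor, beta: float = 0.9) -> torch.Tensor:
    """Negated surrogate bound of Equation (21): maximize I(c1;c2) while minimizing I(z1;z2)."""
    return -(mi_cc - beta * mi_zz)
```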

5. Experimental Analysis and Results

To demonstrate the effectiveness of the SGIB model, a variety of network analysis experiments, including node classification and node clustering, are performed on three widely used citation networks as well as two large-scale datasets. Numerous experiments show that SGIB outperforms the best available methods. In addition, an in-depth analysis of the model is performed, and the experimental results show that the method can also alleviate the over-smoothing problem to some extent.
The experimental design and results analysis in this section will be carried out in three parts. The first part applies the SGIB model to five public data sets to verify whether the network performance is improved after applying the information bottleneck theory to remove redundant information. The second part verifies whether the over-smoothing problem can be effectively mitigated in the SGIB model with the Dropedge algorithm. The third part verifies whether the improvement in the SGIB model is consistent for different label ratios.

5.1. Datasets and Implementation Details

Datasets: The statistics of the datasets used for node-level tasks are shown in Table 1. Citation networks: networks consisting of papers and their relationships, where edges represent relationships between papers, such as citations or common authors, nodes are the papers in the dataset, and features are the characteristics of each paper. Coauthor networks: Coauthor-CS and Coauthor-Phy are two co-authorship networks based on the Microsoft Academic Graph from the KDD Cup 2016 challenge. In these networks, nodes represent authors, and two authors are connected by an edge if they have co-authored a paper. The node features represent the keywords of each author's papers, and the class labels indicate the most active research area of each author.
Implementation details: SGIB follows the experimental setup of previous state-of-the-art methods. For the node classification task, the experimental setup of DGI is followed, and the average classification accuracy and standard deviation on the test nodes are obtained with a linear classifier over 50 training runs. For the node clustering task, the learned node representations are clustered using the K-means algorithm, and the F1 score (F1), normalized mutual information (NMI), and adjusted Rand index (ARI) are averaged over 50 runs. SGIB is trained using a one-layer message-passing network with the Adam optimizer and an initial learning rate of 0.001; early stopping with a patience of 30 is also used. The sampling (edge-keeping) probability of the original graph is chosen from {0.8, 0.9}, the graph representation dimension is chosen from {256, 512}, and the local-global and local-local mutual information weights are chosen from {0.5, 0.9, 1.0}. The software and hardware used in the experiments are as follows: operating system: Ubuntu 7.5.0-3Ubuntu1-18.04; CPU: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30 GHz; graphics card: NVIDIA Quadro RTX 5000 16 GB; CUDA 10.2; PyTorch 1.7.0.
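For the node classification protocol described above, a minimal evaluation sketch is shown below. The use of scikit-learn's logistic regression as the linear classifier and the variable names (`emb`, `labels`, index arrays) are assumptions; the paper reports the mean and standard deviation over 50 training runs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def linear_evaluation(emb, labels, train_idx, test_idx):
    """Fit a linear probe on the frozen node embeddings of the training nodes and
    report accuracy on the test nodes (one run; the paper averages 50 such runs)."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(emb[train_idx], labels[train_idx])
    return accuracy_score(labels[test_idx], clf.predict(emb[test_idx]))
```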

5.2. Node Classification

The citation networks use the standard node split. The co-author networks select 30 nodes per class as the training set, 30 nodes per class as the validation set, and the remaining nodes as the test set. Table 2 reports the mean node classification accuracy of our method and other baselines. The supervised models include MLP, LogReg, label propagation (LP) [29], Chebyshev [30], GCN, graph attention networks (GAT), and mixture model networks (MoNet) [31]. The unsupervised models include DGI, GMI, GRACE, GCA, Graph InfoClust (GIC) [32], and MVGRL.
Table 2 shows that, compared with the six state-of-the-art unsupervised methods, the SGIB model improves the results on all five datasets and achieves the best results on four of them. The improvement is 0.5 percentage points on Cora, 0.8 percentage points on Pubmed, 0.9 percentage points on Coauthor-CS, and 0.6 percentage points on Coauthor-Phy. The most representative unsupervised model, DGI, and SGIB were selected to compare the distributions of experimental results: for each model, 20 experimental results are analyzed, as shown in Figure 3. As can be seen from the figure, the distribution of SGIB results is generally higher than that of DGI. In this study, we believe that the main reason for the performance improvement is the minimization of the mutual information between the node-level vector representations of view 1 and view 2, so that the node representation extracts the most important common and unique information in the graph while eliminating redundant information.
According to the experimental results, the SGIB model works better on large data sets, which may be because large data sets contain more data diversity and more dispersed features, so that more redundant information is generated in the encoding process. Thus, the model proposed in this paper shows stronger competitiveness when it can remove the task-irrelevant information from the data.
Compared with SGIB, unsupervised learning models such as DGI, GMI, and MVGRL only maximize mutual information when learning graph representations, i.e., they learn only the most dominant information in the graph, so their classification performance is slightly weaker than that of SGIB. Moreover, even without label information, the proposed method can still match or slightly outperform supervised learning models such as GAT and GCN, because the high-quality and rich information learned by the SGIB model is sufficient to support the classification task, which guarantees performance on the downstream task.

5.3. Node Clustering

For the node clustering task, in addition to the unsupervised methods mentioned in Section 5.2, further models were selected for comparison, including K-means, spectral clustering, deep neural networks for graph representations (DNGR) [33], relational topic models (RTM) [34], robust multi-view spectral clustering (RMSC) [35], text-associated DeepWalk (TADW) [36], and variational graph auto-encoders (VGAE) [37]. The clustering results on Cora, Citeseer, and Pubmed are shown in Table 3.
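A corresponding sketch of the clustering evaluation behind Table 3 is given below: K-means on the frozen embeddings, scored with NMI and ARI against the ground-truth classes. The scikit-learn calls and variable names are assumptions; the F1 score additionally requires matching predicted clusters to classes (e.g., with the Hungarian algorithm), which is omitted here.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def cluster_evaluation(emb, labels, num_classes: int):
    """Cluster the frozen node embeddings with K-means and score against the true labels."""
    pred = KMeans(n_clusters=num_classes, n_init=10).fit_predict(emb)
    nmi = normalized_mutual_info_score(labels, pred)
    ari = adjusted_rand_score(labels, pred)
    return nmi, ari
```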
From the perspective of quantitative analysis, the performance of the SGIB model outperforms the other models in almost all metrics. Moreover, from the perspective of qualitative analysis, as shown in Figure 4 based on the visualized t-SNE plots on the three datasets, the SGIB model shows more discernible clusters, and the separability and tightness of its clustering results are more evident. The feasibility and rationality of the SGIB model can be seen from both qualitative and quantitative perspectives.

5.4. Ablation Experiment

In order to verify the role played by the information bottleneck in the SGIB model, ablation experiments are also conducted in this paper. To isolate the effect of the Dropedge algorithm, random edge-deletion operations are also applied to two other unsupervised models, verifying whether a higher-quality vector representation can be obtained after removing redundant information. This study selected the classical unsupervised models DGI and GMI, combined each with the Dropedge algorithm, and followed the setup of DGI for the ablation experiments. The experimental results are shown in Table 4.
From the above experimental results, we can see that the SGIB model remains competitive after the effect of Dropedge is removed. For example, after combining the DGI model with the Dropedge algorithm on the Pubmed dataset, there is a significant improvement of 2.7 percentage points on the node classification task, from which it can be inferred that randomly deleting edges in the network removes some redundant information and thus improves model performance. The SGIB model still shows a nearly 1 percentage point improvement over the DGI + Dropedge model, which is attributable to the information bottleneck and shows that it is effective at removing redundant information from the data representation, further improving model performance. Moreover, the improvement on the larger Pubmed dataset is more significant than on the two smaller datasets: the larger the dataset and the more redundant information it contains, the more significant the performance improvement of the SGIB model.

5.5. Node Classification with Various Depths

In this subsection, the node classification performance of the SGIB model and other mainstream unsupervised models is compared as the model depth is gradually increased. Compared with the DGI, GMI, and MVGRL models, the SGIB model is more effective at mitigating network over-smoothing. To ensure the fairness of the experiments, the same set of hyperparameters and the same drop rate p are applied, and the network depth ranges from 1 to 32 layers. The experimental results are shown in Table 5.
The comparison reveals that with a two-layer network, SGIB generally does not perform as well as the GMI model on the Cora and Citeseer datasets, which may be because GMI was originally designed as a two-layer network. As the network depth increases, the SGIB model achieves the best results at every depth from 4 to 32 layers. In particular, it shows a significant improvement of 9.5 percentage points on the Pubmed dataset, from which it can be inferred that the SGIB model may be better at mitigating over-smoothing on large datasets.

5.6. Limited Labeled Training

This section focuses on the performance comparison of the SGIB model with other mainstream graph neural network models (GCN, GAT, DGI, MVGRL) on node classification tasks at low labeling rates. Thus, the improvement of SGIB model performance is verified to be consistent under different labeling rates. The labeling rates were set to 1%, 2%, and 3% on the Cora and Citeseer datasets, and 0.03%, 0.05%, and 0.1% on the Pubmed dataset.
From the experimental results shown in Figure 5, it can be concluded that the SGIB model also achieves excellent results under different labeling rates, and the improvement is consistent across them. On all three datasets, the lower the labeling rate, the more obvious the improvement. For example, on the Cora dataset with a 1% labeling rate, there is a 4.3 percentage point improvement, which indicates that the SGIB model relies more on the self-supervised approach to obtain information about the nodes themselves.

6. Conclusions

In this paper, we propose a graph representation learning model based on the information bottleneck to address the problem that neighboring node features in the input graph may contain useless information, and we implement the self-supervised SGIB algorithm using contrastive learning. The SGIB algorithm introduces information bottleneck theory so that the vector representation learned by the encoder from the graph-structured data contains the minimum sufficient input information. Experiments were conducted on five datasets, and the results show that (1) the SGIB algorithm removes redundant information while extracting the common and independent features of multiple inputs, making the encoder less susceptible to noisy data; (2) the Dropedge algorithm used for contrastive learning can alleviate the over-fitting problem to a certain extent; (3) the performance improvement of the SGIB algorithm over state-of-the-art supervised and unsupervised learning algorithms is verified in a large number of experiments, and the SGIB algorithm performs better on large-scale datasets.

Author Contributions

Conceptualization, J.G.; methodology, L.Y.; software, Z.Z.; validation, Z.Z.; writing-original draft preparation, Z.Z.; writing-review and editing, W.Z.; supervision, Y.Z.; funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grant 61972442; in part by the Key Research and Development Project of Hebei Province of China under Grant 20350802D and 20310802D; in part by the Natural Science Foundation of Hebei Province of China under Grant F2020202040; in part by the Natural Science Foundation of Tianjin of China under Grant 20JCYBJC00650.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  2. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
  3. Hamilton, W.L.; Ying, R.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the 30th Advances in Neural Information Processing Systems Conference, Long Beach, CA, USA, 4–9 December 2017; pp. 1024–1034.
  4. Pan, S.; Wu, J.; Zhu, X.; Long, G.; Zhang, C. Task sensitive feature exploration and learning for multitask graph classification. IEEE Trans. Cybern. 2016, 47, 744–758.
  5. Chen, D.; Nie, M.; Zhang, H.; Wang, Z.; Wang, D. Network embedding algorithm taking in variational graph autoencoder. Mathematics 2022, 10, 485.
  6. Pan, S.; Hu, R.; Long, G.; Jiang, J.; Yao, L.; Zhang, C. Adversarially regularized graph autoencoder for graph embedding. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 2609–2615.
  7. Zhang, Y.; Pal, S.; Coates, M.; Ustebay, D. Bayesian graph convolutional neural networks for semi-supervised classification. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 5829–5836.
  8. Qiu, J.; Tang, J.; Ma, H.; Dong, Y.; Wang, K.; Tang, J. Deepinf: Social influence prediction with deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 2110–2119.
  9. Bai, Y.; Ding, H.; Bian, S.; Chen, T.; Sun, Y.; Wang, W. Simgnn: A neural network approach to fast graph similarity computation. In Proceedings of the 12th ACM International Conference on Web Search and Data Mining, Melbourne, VIC, Australia, 11–15 February 2019; pp. 384–392.
  10. Sun, F.Y.; Hoffmann, J.; Verma, V.; Tang, J. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020.
  11. Liben-Nowell, D.; Kleinberg, J. The link-prediction problem for social networks. J. Assoc. Inf. Sci. Technol. 2007, 58, 1019–1031.
  12. Zhang, M.H.; Chen, Y.X. Link prediction based on graph neural networks. In Proceedings of the 31st Advances in Neural Information Processing Systems Conference, Montréal, QC, Canada, 3–8 December 2018; pp. 5171–5181.
  13. Velickovic, P.; Fedus, W.; Hamilton, W.L.; Liò, P.; Bengio, Y.; Hjelm, R.D. Deep graph infomax. In Proceedings of the 7th International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019.
  14. Peng, Z.; Huang, W.; Luo, M.; Zheng, Q.; Rong, Y.; Xu, T.; Huang, J. Graph representation learning via graphical mutual information maximization. In Proceedings of The Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 259–270.
  15. Hassani, K.; Khasahmadi, A.H. Contrastive multi-view representation learning on graphs. In Proceedings of the 37th International Conference on Machine Learning, Virtual Event, 13–18 July 2020; Volume 119, pp. 4116–4126.
  16. Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; Wang, L. Deep graph contrastive representation learning. arXiv 2020, arXiv:2006.04131.
  17. Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; Wang, L. Graph contrastive learning with adaptive augmentation. In Proceedings of The Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 2069–2080.
  18. Yang, L.; Wu, F.; Zheng, Z.; Niu, B.; Gu, J.; Wang, C.; Cao, X.; Guo, Y. Heterogeneous graph information bottleneck. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 19–27 August 2021; pp. 1638–1645.
  19. Yu, J.; Xu, T.; Rong, Y.; Bian, Y.; Huang, J.; He, R. Recognizing predictive substructures with subgraph information bottleneck. arXiv 2021, arXiv:2103.11155.
  20. Yu, J.; Cao, J.; He, R. Improving subgraph recognition with variational graph information bottleneck. arXiv 2021, arXiv:2112.09899.
  21. Belghazi, M.I.; Baratin, A.; Rajeswar, S.; Ozair, S.; Bengio, Y.; Courville, A.; Hjelm, R.D. Mutual information neural estimation. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Volume 80, pp. 530–539.
  22. Achille, A.; Soatto, S. Emergence of invariance and disentanglement in deep representations. J. Mach. Learn. Res. 2018, 19, 1947–1980.
  23. Tishby, N.; Pereira, F.C.; Bialek, W. The information bottleneck method. arXiv 2000, arXiv:physics/0004057.
  24. Alemi, A.A.; Fischer, I.; Dillon, J.V.; Murphy, K. Deep variational information bottleneck. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
  25. Federici, M.; Dutta, A.; Forré, P.; Kushman, N.; Akata, Z. Learning robust representations via multi-view information bottleneck. In Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020.
  26. Rong, Y.; Huang, W.; Xu, T.; Huang, J. Dropedge: Towards deep graph convolutional networks on node classification. In Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020.
  27. Nowozin, S.; Cseke, B.; Tomioka, R. f-gan: Training generative neural samplers using variational divergence minimization. In Proceedings of the 29th Advances in Neural Information Processing Systems Conference, Barcelona, Spain, 5–10 December 2016; pp. 271–279.
  28. Oord, A.; Li, Y.; Vinyals, O. Representation learning with contrastive predictive coding. arXiv 2018, arXiv:1807.03748.
  29. Zhu, X.; Ghahramani, Z.; Lafferty, J.D. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning, Washington, DC, USA, 21–24 August 2003; pp. 912–919.
  30. Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of the 29th Advances in Neural Information Processing Systems Conference, Barcelona, Spain, 5–10 December 2016; pp. 3837–3845.
  31. Monti, F.; Boscaini, D.; Masci, J.; Rodola, E.; Svoboda, J.; Bronstein, M.M. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5424–5434.
  32. Mavromatis, C.; Karypis, G. Graph infoclust: Maximizing coarse-grain mutual information in graphs. In Advances in Knowledge Discovery and Data Mining, Proceedings of the 25th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Virtual Event, 11–14 May 2021; Karlapalem, K., Cheng, H., Ramakrishnan, N., Agrawal, R.K., Reddy, P.K., Srivastava, J., Chakraborty, T., Eds.; Springer: Cham, Switzerland, 2021; Volume 12712, pp. 541–553.
  33. Cao, S.; Lu, W.; Xu, Q. Deep neural networks for learning graph representations. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 1145–1152.
  34. Chang, J.; Blei, D. Relational topic models for document networks. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, Clearwater Beach, FL, USA, 16–18 April 2009; Volume 5, pp. 81–88.
  35. Xia, R.; Pan, Y.; Du, L.; Yin, J. Robust multi-view spectral clustering via low-rank and sparse decomposition. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014; pp. 2149–2155.
  36. Yang, C.; Liu, Z.; Zhao, D.; Sun, M.; Chang, E. Network representation learning with rich text information. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015; pp. 2111–2117.
  37. Kipf, T.N.; Welling, M. Variational graph auto-encoders. arXiv 2016, arXiv:1611.07308.
Figure 1. Overall structure of SGIB.
Figure 2. Information bottleneck with multi-input. (A) Input data for view one; (B) Input data for view two; (Y) The target tasks or labels.
Figure 3. Box-plot of DGI and SGIB.
Figure 4. The visualization of the embeddings obtained from DGI, GMI, MVGRL and SGIB. (a) DGI for Cora; (b) GMI for Cora; (c) MVGRL for Cora; (d) SGIB for Cora; (e) DGI for Citeseer; (f) GMI for Citeseer; (g) MVGRL for Citeseer; (h) SGIB for Citeseer; (i) DGI for Pubmed; (j) GMI for Pubmed; (k) MVGRL for Pubmed; (l) SGIB for Pubmed.
Figure 5. Node classification results with limited training labels on Cora, Citeseer, and Pubmed.
Table 1. Statistics of the datasets used in experiments.

| Datasets | Nodes | Edges | Features | Classes | Train/Val/Test Nodes |
|---|---|---|---|---|---|
| Cora | 2708 | 5429 | 1433 | 7 | 140/500/1000 |
| Citeseer | 3327 | 4732 | 3703 | 6 | 120/500/1000 |
| Pubmed | 19,717 | 44,338 | 500 | 3 | 60/500/1000 |
| Coauthor-CS | 18,333 | 81,894 | 6805 | 15 | 450/450/17,433 |
| Coauthor-Phy | 34,493 | 247,962 | 8415 | 5 | 150/150/34,193 |
Table 2. Accuracies in percent on node classification.

| Type | Methods | Input | Cora | Citeseer | Pubmed | Coauthor-CS | Coauthor-Phy |
|---|---|---|---|---|---|---|---|
| Supervised | MLP | X, Y | 58.2 ± 2.1 | 59.1 ± 2.3 | 70.0 ± 2.1 | 88.3 ± 0.7 | 88.9 ± 1.1 |
| | LogReg | X, A, Y | 57.1 ± 2.3 | 61.0 ± 2.2 | 64.1 ± 3.1 | 86.4 ± 0.9 | 86.7 ± 1.5 |
| | LP | A, Y | 68.0 ± 0.2 | 45.3 ± 0.2 | 63.0 ± 0.5 | 74.3 ± 0.0 | 90.2 ± 0.5 |
| | Chebyshev | X, A, Y | 81.2 ± 0.5 | 69.8 ± 0.5 | 74.4 ± 0.3 | 91.5 ± 0.0 | 92.1 ± 0.3 |
| | GCN | X, A, Y | 81.5 ± 0.2 | 70.3 ± 0.3 | 79.0 ± 0.4 | 91.8 ± 0.1 | 92.6 ± 0.7 |
| | GAT | X, A, Y | 83.0 ± 0.7 | 72.5 ± 0.7 | 79.0 ± 0.3 | 90.5 ± 0.7 | 91.3 ± 0.6 |
| | MoNet | X, A, Y | 81.3 ± 1.3 | 71.2 ± 2.0 | 78.6 ± 2.3 | 90.8 ± 0.6 | 92.5 ± 0.9 |
| Unsupervised | DGI | X, A | 81.7 ± 0.6 | 71.5 ± 0.7 | 77.3 ± 0.6 | 90.0 ± 0.3 | 91.3 ± 0.4 |
| | GMI | X, A | 80.7 ± 0.7 | 71.1 ± 0.2 | 78.0 ± 1.0 | 91.0 ± 0.0 | OOM |
| | GRACE | X, A | 80.0 ± 0.4 | 71.7 ± 0.6 | 79.5 ± 1.1 | 90.1 ± 0.8 | 92.3 ± 0.6 |
| | GCA | X, A | 80.5 ± 0.5 | 71.3 ± 0.4 | 78.6 ± 0.6 | 91.3 ± 0.4 | 93.1 ± 0.3 |
| | GIC | X, A | 81.7 ± 1.5 | 71.9 ± 1.4 | 77.3 ± 1.9 | 89.4 ± 0.4 | 93.1 ± 0.7 |
| | MVGRL | X, A | 82.8 ± 1.0 | 72.7 ± 0.5 | 79.6 ± 0.8 | 91.0 ± 0.6 | 93.2 ± 1.0 |
| | SGIB | X, A | 83.3 ± 0.7 | 71.7 ± 0.8 | 80.4 ± 0.6 | 92.2 ± 0.5 | 93.8 ± 0.8 |
Table 3. Node clustering results in Micro F1, NMI and ARI.

| Methods | Cora F1 | Cora NMI | Cora ARI | Citeseer F1 | Citeseer NMI | Citeseer ARI | Pubmed F1 | Pubmed NMI | Pubmed ARI |
|---|---|---|---|---|---|---|---|---|---|
| K-means | 0.368 | 0.321 | 0.230 | 0.409 | 0.305 | 0.279 | 0.195 | 0.001 | 0.002 |
| Spectral | 0.318 | 0.127 | 0.031 | 0.299 | 0.056 | 0.010 | 0.271 | 0.042 | 0.002 |
| DeepWalk | 0.392 | 0.327 | 0.243 | 0.270 | 0.088 | 0.092 | 0.670 | 0.279 | 0.299 |
| DNGR | 0.340 | 0.318 | 0.142 | 0.300 | 0.180 | 0.044 | 0.467 | 0.155 | 0.054 |
| RTM | 0.307 | 0.230 | 0.169 | 0.342 | 0.239 | 0.203 | 0.444 | 0.194 | 0.148 |
| RMSC | 0.331 | 0.255 | 0.090 | 0.320 | 0.139 | 0.049 | 0.421 | 0.255 | 0.222 |
| TADW | 0.481 | 0.441 | 0.332 | 0.414 | 0.291 | 0.228 | 0.335 | 0.001 | 0.001 |
| GAE | 0.595 | 0.429 | 0.347 | 0.327 | 0.176 | 0.124 | 0.660 | 0.277 | 0.279 |
| VGAE | 0.609 | 0.436 | 0.346 | 0.308 | 0.156 | 0.093 | 0.634 | 0.229 | 0.213 |
| DGI | 0.707 | 0.544 | 0.472 | 0.714 | 0.479 | 0.485 | 0.667 | 0.307 | 0.277 |
| GMI | 0.701 | 0.542 | 0.495 | 0.667 | 0.419 | 0.418 | 0.644 | 0.239 | 0.225 |
| SGIB | 0.714 | 0.546 | 0.505 | 0.716 | 0.487 | 0.487 | 0.673 | 0.307 | 0.279 |
Table 4. Ablation experiments.

| Methods | Cora | Citeseer | Pubmed |
|---|---|---|---|
| DGI | 82.3 | 71.8 | 76.8 |
| DGI + Dropedge | 82.9 | 72.0 | 79.5 |
| GMI | 80.7 | 71.1 | 78.0 |
| GMI + Dropedge | 81.9 | 69.7 | 78.2 |
| SGIB | 83.3 | 71.7 | 80.4 |
Table 5. Node classification accuracy with various depths.

| Data | Methods | Depth 1 | Depth 2 | Depth 4 | Depth 8 | Depth 16 | Depth 32 |
|---|---|---|---|---|---|---|---|
| Cora | DGI | 82.30 | 79.36 | 73.10 | 21.87 | 20.01 | 16.43 |
| | DGI + D | 82.90 | 79.00 | 72.60 | 36.48 | 21.60 | 16.05 |
| | GMI | — | 80.70 | 74.06 | 37.81 | 16.22 | 16.06 |
| | GMI + D | — | 81.95 | 77.07 | 38.01 | 15.75 | 16.15 |
| | MVGRL | 82.80 | 81.74 | 78.20 | 28.38 | 22.04 | 16.82 |
| | SGIB | 83.32 | 80.80 | 79.07 | 65.54 | 23.80 | 20.86 |
| Citeseer | DGI | 71.80 | 70.24 | 62.34 | 28.18 | 20.39 | 17.10 |
| | DGI + D | 72.02 | 70.51 | 64.85 | 32.59 | 21.25 | 17.07 |
| | GMI | — | 71.10 | 58.80 | 38.18 | 20.70 | 17.06 |
| | GMI + D | — | 69.70 | 54.76 | 39.60 | 24.41 | 19.83 |
| | MVGRL | 72.70 | 69.28 | 60.29 | 52.96 | 33.02 | 18.32 |
| | SGIB | 71.73 | 71.29 | 67.50 | 58.11 | 35.51 | 20.11 |
| Pubmed | DGI | 76.80 | 73.80 | 65.23 | 50.56 | 45.21 | 34.97 |
| | DGI + D | 79.48 | 71.83 | 64.94 | 51.20 | 41.84 | 34.89 |
| | GMI | — | 78.00 | 75.50 | 61.41 | 44.36 | 34.67 |
| | GMI + D | — | 78.18 | 74.62 | 61.32 | 36.39 | 34.81 |
| | MVGRL | 79.60 | 75.30 | 67.78 | 36.28 | 34.40 | 34.12 |
| | SGIB | 80.44 | 80.10 | 75.50 | 61.58 | 45.60 | 43.66 |