Paper Recap -- Towards Interpretable Sparse Graph Representation Learning with Laplacian Pooling

Published: 03.20.2020

Paper Authors: Noutahi et al.

Paper: arXiv

TLDR

  • Hierarchical graph pooling approach meant to improve the interpretability of graph neural networks.
  • LaPool iteratively coarsens graphs (collapsing groups of nodes into single nodes) so that message-passing updates can reach nodes that are distant in the original graph.
  • The main intuition is to use a node-level smoothness measure to determine which nodes to use as pooling centers.
  • Models that use this pooling layer outperform other pooling-based GNNs.

Methods

  • The graph Laplacian is \(\mathbf{L} = \mathbf{D} - \mathbf{A}\), where \(\mathbf{D}\) is a diagonal matrix storing the degree of each node in the graph, and \(\mathbf{A}\) is the standard adjacency matrix.
  • Define a smoothness measure
  • \(s(\mathbf{f}) = \mathbf{f}^T \mathbf{L} \mathbf{f} = \frac{1}{2} \sum_{i,j=1}^{n} \mathbf{A}_{i,j} (f_i - f_j)^2\),
  • where \(\mathbf{f}\) can be thought of as a vector of signals \(f_i\), one coming from each node.
  • Therefore \(s\) measures how much the signal differs between neighboring nodes, i.e. how much each node stands out from its neighbors. A minimal sketch of these quantities appears below.
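
Here is a minimal sketch of the two quantities above using numpy. It builds the Laplacian \(\mathbf{L} = \mathbf{D} - \mathbf{A}\) from an adjacency matrix and checks that the quadratic form \(\mathbf{f}^T \mathbf{L} \mathbf{f}\) equals the edge-difference sum. The small path graph and signal values are made up purely for illustration; this is not the paper's code.

```python
import numpy as np

# Adjacency matrix of a toy graph (a 4-node path 0-1-2-3);
# the graph is made up for illustration.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Degree matrix D and graph Laplacian L = D - A.
D = np.diag(A.sum(axis=1))
L = D - A

# A toy node signal f (one scalar per node).
f = np.array([1.0, 1.0, 0.0, 0.0])

# Smoothness via the quadratic form f^T L f ...
s_quadratic = f @ L @ f

# ... and via the equivalent edge-difference sum
# (1/2) * sum_{i,j} A_ij (f_i - f_j)^2.
diffs = f[:, None] - f[None, :]
s_edges = 0.5 * np.sum(A * diffs ** 2)

# Both equal 1.0 here: only the edge (1, 2) crosses a change in signal.
print(s_quadratic, s_edges)
```

In LaPool, a smoothness signal of this kind, computed over the learned node embeddings rather than a toy scalar signal, is what identifies the locally dominant nodes that serve as pooling centers.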