Stochastic Blockmodels meet Graph Neural Networks

Nikhil Mehta, Lawrence Carin, Piyush Rai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Scopus citations

Abstract

Stochastic blockmodels (SBM) and their variants, e.g., mixed-membership and overlapping stochastic blockmodels, are latent variable based generative models for graphs. They have proven successful for various tasks, such as discovering community structure and link prediction on graph-structured data. Recently, graph neural networks, e.g., graph convolutional networks, have also emerged as a promising approach to learn powerful representations (embeddings) for the nodes in a graph, by exploiting graph properties such as locality and invariance. In this work, we unify these two directions by developing a sparse variational autoencoder for graphs that retains the interpretability of SBMs while also enjoying the excellent predictive performance of graph neural networks. Moreover, our framework is accompanied by a recognition model that enables fast inference of the node embeddings (which are of independent interest for inference in SBM and its variants). Although we develop this framework for a particular type of SBM, namely the overlapping stochastic blockmodel, the proposed framework can be adapted readily to other types of SBMs. Experimental results on several benchmarks demonstrate encouraging performance on link prediction while learning an interpretable latent structure that can be used for community discovery.
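To make the overlapping-SBM idea behind the abstract concrete, the following is a minimal toy sketch (not the paper's actual model, which uses neural-network encoders and variational inference): each node holds a binary membership vector over K overlapping communities, and link probabilities come from a bilinear form over memberships. The names `Z`, `W`, and all numeric settings here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical sizes): N nodes, K overlapping communities.
N, K = 8, 3

# Binary membership matrix Z: a node may belong to several communities,
# which is what distinguishes the overlapping SBM from the classic SBM.
Z = (rng.random((N, K)) < 0.4).astype(float)

# Community-interaction weights W, chosen diagonal-dominant so that
# within-community links are more likely than cross-community ones.
W = np.full((K, K), -2.0) + 5.0 * np.eye(K)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Link probability for every node pair: p_ij = sigmoid(z_i^T W z_j).
P = sigmoid(Z @ W @ Z.T)

# Sample a symmetric, self-loop-free adjacency matrix from the
# Bernoulli link model.
A = np.triu(rng.random((N, N)) < P, 1).astype(int)
A = A + A.T
```

In the paper's setting, `Z` would instead be inferred from an observed graph by a recognition network, and link prediction amounts to reading held-out entries of `P`.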
Original language: English (US)
Title of host publication: 36th International Conference on Machine Learning, ICML 2019
Publisher: International Machine Learning Society (IMLS)
Pages: 7849-7857
Number of pages: 9
ISBN (Print): 9781510886988
State: Published - Jan 1 2019
Externally published: Yes
