We focus on the active symmetries of GNNs, and show a bias-variance tradeoff controlled by the choice of symmetry.
We quantify which distances MPNNs induce, leading to a fine-grained understanding of their expressivity.
We propose spectral-inspired GNNs that exploit the advantages from both global and local methods.
We give a simple generative model in which spectral graph embedding followed by inference succeeds, whereas unsupervised GCN fails.
We propose a new method to make NLP models more parameter efficient by storing and retrieving knowledge in an external knowledge graph.
An interactive visualization showing the results of topic modelling on 33k Wikipedia articles.
An interactive Jupyter notebook implementing a live chatbot that uses N-gram blocking.
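N-gram blocking suppresses repetitive generations by rejecting any continuation that would repeat an n-gram already present in the output. A minimal sketch of the idea (function names and the candidate-filtering interface are ours, not from the notebook):

```python
from typing import List


def has_repeated_ngram(tokens: List[str], n: int = 3) -> bool:
    """Return True if any n-gram of length n occurs more than once in tokens."""
    seen = set()
    for i in range(len(tokens) - n + 1):
        ngram = tuple(tokens[i:i + n])
        if ngram in seen:
            return True
        seen.add(ngram)
    return False


def block_candidates(prefix: List[str], candidates: List[str], n: int = 3) -> List[str]:
    """Keep only next-token candidates that do not create a repeated n-gram."""
    return [tok for tok in candidates
            if not has_repeated_ngram(prefix + [tok], n)]
```

For example, with prefix `["a", "b", "c", "a", "b"]` and `n=3`, the candidate `"c"` is blocked because it would recreate the trigram `("a", "b", "c")`, while `"d"` is kept.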
A study of three matrix imputation methods on different types of missing-data patterns.
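As a point of reference for what such a study compares against, here is a minimal sketch of the simplest baseline, column-mean imputation (the function name and NumPy-only setup are our illustration, not the study's code; the other methods in the study are not shown):

```python
import numpy as np


def mean_impute(X: np.ndarray) -> np.ndarray:
    """Fill each NaN entry with the mean of the observed values in its column."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)          # per-column mean, ignoring NaNs
    rows, cols = np.where(np.isnan(X))         # positions of missing entries
    X[rows, cols] = col_means[cols]            # fill by column
    return X
```

Mean imputation is unbiased only under the simplest missingness pattern (missing completely at random), which is one reason such studies vary the missing-data mechanism.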