Speaker: Michael Perlmutter (UCLA)
Abstract: The field of Geometric Deep Learning aims to extend the success of Convolutional Neural Networks to data sets that lack a Euclidean, grid-like structure and are more naturally modeled as manifolds or (possibly directed) graphs. A major advance in this field has been the rise of graph convolutional networks, which extend the success of CNNs to the graph domain by defining convolution either as a localized averaging operation or via the eigendecomposition of the graph Laplacian.
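The spectral definition of graph convolution mentioned above can be sketched in a few lines: diagonalize the graph Laplacian, rescale a signal's eigen-coefficients with a filter function, and transform back. This is a minimal illustrative sketch (the toy graph, signal, and choice of filter g are assumptions, not from the talk):

```python
import numpy as np

# Toy undirected graph: a 4-node path (illustrative only).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                           # combinatorial graph Laplacian

# Spectral graph convolution: filter a node signal x by scaling
# its coefficients in the Laplacian eigenbasis.
evals, U = np.linalg.eigh(L)        # L = U diag(evals) U^T
x = np.array([1.0, 0.0, 0.0, 0.0])  # a signal supported on node 0

g = np.exp(-evals)                  # an example low-pass filter g(lambda)
x_filtered = U @ (g * (U.T @ x))    # back-transform the filtered coefficients
```

Low-pass filters like the one above smooth the signal across edges, which is precisely why standard graph convolutional networks struggle to retain high-frequency information.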
Despite the success of these networks, standard graph convolutional networks also have some limitations: they have difficulty (i) incorporating high-frequency information and long-range dependencies, and (ii) handling directed graphs. In this talk, I will discuss how we can overcome these difficulties via (i) the graph and manifold scattering transforms, which capture high-frequency information and long-range dependencies via wavelet filters, and (ii) MagNet, a directed graph neural network that encodes directional information via a Hermitian matrix known as the magnetic Laplacian.
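The magnetic Laplacian referred to above is built by symmetrizing a directed adjacency matrix while recording edge directions as complex phases, so the result is Hermitian with real, nonnegative eigenvalues. A minimal sketch of the standard construction (the toy graph and the choice of charge parameter q are assumptions for illustration):

```python
import numpy as np

# Toy directed graph on 3 nodes with edges 0->1 and 1->2 (illustrative only).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)

q = 0.25                              # charge parameter controlling the phases
A_s = (A + A.T) / 2                   # symmetrized adjacency
D_s = np.diag(A_s.sum(axis=1))        # degrees of the symmetrized graph
Theta = 2 * np.pi * q * (A - A.T)     # antisymmetric phase matrix
H = A_s * np.exp(1j * Theta)          # Hermitian adjacency: A_s with phases
L_mag = D_s - H                       # magnetic Laplacian

# Because L_mag is Hermitian, its eigenvalues are real,
# while the complex entries still encode edge direction.
evals = np.linalg.eigvalsh(L_mag)
```

Since L_mag is Hermitian and positive semidefinite, the spectral machinery of undirected graph convolution carries over, which is what lets a network like MagNet handle directed graphs.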