Latest developments in Graph Neural Networks (GNNs)

Graph Neural Networks (GNNs) have seen rapid development recently. This blog provides a quick introduction to the methodology and summarizes some of the latest research on GNNs from top AI conferences.

By
Crossminds
on
October 11, 2020
Category:
Research Spotlights

Neural networks have revolutionized the field of machine learning across data structures, from images and sound clips to text and tabular data. In this blog, we will explore the latest applications of neural networks to graphs. The idea of Graph Neural Networks (GNNs) has been around since 2005, but like most other neural network methods, it has seen rapid development only recently.

If you’d like to refresh the basic math behind neural networks before proceeding, please check out this article.

What is a Graph?

A graph is a datatype containing nodes (vertices) that connect to each other through edges, which can be directed or undirected. Each node has a set of features (which could represent properties of nodes or could be one-hot-encoded information), and the edges define relations between nodes.
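To make the definition concrete, here is a minimal sketch of a graph in plain Python; the node names, feature values, and edges are hypothetical, chosen only for illustration:

```python
# A minimal graph as plain Python data: per-node feature vectors plus a
# directed edge list. Real GNN libraries use tensor representations,
# but the underlying structure is the same.
node_features = {
    "A": [1.0, 0.0],   # e.g. a one-hot encoded node property
    "B": [0.0, 1.0],
    "C": [1.0, 1.0],
}

# Directed edges as (source, target) pairs; for an undirected graph,
# include both directions for each edge.
edges = [("A", "B"), ("B", "C"), ("A", "C")]

def neighbors(node, edges):
    """Return the targets of all edges leaving `node`."""
    return [dst for src, dst in edges if src == node]

print(neighbors("A", edges))  # → ['B', 'C']
```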

A graph representing relations between languages using directed edges [source]

What is a Graph Neural Network?

In a typical GNN, message passing is performed between nearby nodes through the edges. Intuitively, a message is a neural encoding of the information that is passed from one node to its connected neighbors. At each layer, the representation of a node is computed by aggregating the messages from all of its neighbors. After multiple rounds of message passing, one obtains a vector representation for each node, which can be interpreted as an embedding describing not only the node’s own features but also the graph structure of its neighborhood. See the image below for an illustration of the process. To learn more about the mathematics behind GNNs, check out this article.
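As a rough sketch of this idea (not the exact formulation of any particular GNN variant), one message-passing layer with mean aggregation can be written in NumPy as follows; the weight matrices and the tiny three-node graph are hypothetical:

```python
import numpy as np

def message_passing_layer(H, A, W_self, W_neigh):
    """One round of message passing: each node averages its neighbors'
    feature vectors, then combines that aggregate with its own features
    through weight matrices and a ReLU nonlinearity.

    H: (num_nodes, d_in) node feature matrix
    A: (num_nodes, num_nodes) adjacency matrix, A[i, j] = 1 if j -> i
    """
    deg = A.sum(axis=1, keepdims=True)           # number of incoming neighbors
    agg = (A @ H) / np.maximum(deg, 1)           # mean of neighbor messages
    return np.maximum(0, H @ W_self + agg @ W_neigh)  # ReLU activation

# Tiny example: 3 nodes, 2 input features, random (hypothetical) weights.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 2))
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
W_self = rng.normal(size=(2, 4))
W_neigh = rng.normal(size=(2, 4))

# Stacking two such layers would let a node's embedding depend on its
# 2-hop neighborhood, matching the 2-layer computation graph below.
H1 = message_passing_layer(H, A, W_self, W_neigh)
print(H1.shape)  # (3, 4)
```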

Illustration of GNN model architecture. Suppose that we run a 2-layer GNN to compute the representation of node A; the computation is shown on the right, where every directed arrow between two nodes indicates a message that is passed from the source to the target. [Rex Ying et al. 2018]


A graph can represent many kinds of data, from social networks and images to chemical structures, neurons in the human brain, and even a regular fully connected neural network. That’s what makes GNNs so useful!


We will now be exploring the latest developments in GNNs from a few interesting research papers below. 

XGNN: Towards Model-Level Explanations of Graph Neural Networks

One of the major problems with neural networks is that they are used as black boxes: it is challenging to obtain interpretable results. They are unlikely to be adopted in critical settings due to the lack of reasoning behind their decisions. Current methods interpret a network’s outputs using gradients, perturbations, and activations generated during the forward pass. Still, these methods are not very effective and are extremely difficult to apply to GNNs.

This paper published at KDD 2020 addresses this problem using a novel method, XGNN, by combining Generative methods and Reinforcement Learning. This method can be used to obtain information to understand, verify, and even improve the trained GNNs. Watch the spotlight conference talk below for a quick overview of the paper:

[KDD 2020] XGNN: Towards Model-Level Explanations of Graph Neural Networks
Illustrations of XGNN for graph interpretation via graph generation [Hao Yuan et al. Source]


Neural Dynamics on Complex Networks

This paper tackles the challenge of capturing continuous-time dynamics in complex networks. The authors propose a combination of ODEs (ordinary differential equations) and GNNs to effectively model the system structure and dynamics, so we can better understand, predict, and control complex networks. Watch the conference talk below for a quick paper overview:

[KDD 2020] Neural Dynamics on Complex Networks
Heat diffusion on different networks [Chengxi Zang & Fei Wang. Source]


Competitive Analysis for Points of Interest

This next paper by Baidu Research is a practical application of GNNs to model consumer choices among adjacent business entities providing similar products or services (referred to as Points of Interest, or POIs). To predict competitive relationships among POIs, the authors develop a GNN-based deep learning framework, DeepR, that integrates heterogeneous user behavior data, business reviews, and map search data for POIs. Check out the spotlight talk below to learn more:

[KDD 2020] Competitive Analysis for Points of Interest
Illustration of the proposed DeepR framework [Shuangli Li et al. source]

Comprehensive Information Integration Modeling Framework for Video Titling

This paper by Alibaba Group aims to leverage massive product review videos created by consumers to better understand their preferences and recommend relevant videos to potential customers. One major problem with these videos is that they are not labeled properly. The paper thus proposes a two-step method, which comprises both granular-level interaction modeling and abstraction-level story-line summarization through GNNs, to create video titles based on a host of factors. Learn more about this paper in the spotlight talk below:

[KDD 2020] Comprehensive Information Integration Modeling Framework for Video Titling
Gavotte: Graph Based Video Title Generator [Shengyu Zhang et al. source]

Knowing Your FATE: Explanations for User Engagement Prediction on Social Apps

This paper by the Snapchat team explores user engagement on social media applications using GNNs. It proposes an end-to-end neural framework to predict user engagement based on a set of factors covering the number and quality of friends, the relevance of content posted by a user, user actions, and temporal factors. This is one of the most intuitive applications of GNNs. Learn more about the paper from the oral presentation below:

[KDD 2020] Knowing your FATE: Explanations for User Engagement Prediction on Social Apps
The overall framework of FATE [Xianfeng Tang et al. source]


You can find more recent research presentations in our GNN collection.


Sign up with Crossminds.ai to get personalized recommendations of the latest tech research videos!

Join Crossminds Now!

Crossminds.ai is a personalized research video platform for tech professionals. We aim to empower your growth with the latest and most relevant research, industry, and career updates.