Graph Neural Networks (GNNs) are a class of neural networks that operate on data structured as graphs.
In an earlier article on Geometric Deep Learning, we saw how tasks such as image processing, image classification, and speech recognition are represented in Euclidean space. Graphs are non-Euclidean and can be used to study and analyse 3D data.
The Rise of GNN
GNNs involve converting non-structured data like images and text into graphs to perform analysis. A graph is a data structure with two components, vertices (V) and edges (E), generally written as G = (V, E). GNNs can be applied directly to graphs and provide an easy way to perform node-level, edge-level, and graph-level prediction tasks. They have mainly been used in modelling physical systems, predicting protein interfaces, learning molecular fingerprints, etc.
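To make the G = (V, E) definition concrete, here is a minimal sketch in Python of how such a graph is typically fed to a GNN: an adjacency matrix for the edges and a feature matrix with one row per node. The edge list and feature dimensions are illustrative, not from any specific dataset.

```python
import numpy as np

# A toy undirected graph G = (V, E) with 4 vertices and 4 edges,
# stored as an adjacency matrix -- a common input format for GNNs.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
num_nodes = 4

A = np.zeros((num_nodes, num_nodes))
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected graph: the adjacency matrix is symmetric

# Node feature matrix X: one 2-dimensional feature vector per node.
# A node-level task predicts a label per row; a graph-level task
# pools all rows into a single prediction for the whole graph.
X = np.random.randn(num_nodes, 2)

print(A.sum())  # 8.0 -- each undirected edge is counted twice
```

Edge-level tasks, by contrast, predict a label for each (i, j) pair where A[i, j] is non-zero.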
GNNs can capture the dependence of graphs via message passing between the nodes of graphs. Unlike standard neural networks, GNNs retain a state representing information from their neighbourhood with arbitrary depth.
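The message passing mentioned above can be sketched as follows. This is a simplified, GCN-style update in NumPy: each node aggregates its neighbours' states, applies a shared weight matrix, and passes the result through a non-linearity. The function name and toy values are illustrative, not taken from any library.

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One simplified message-passing step (GCN-style):
    each node sums the states of its neighbours (and itself),
    normalises by degree, applies a learned weight matrix W,
    and a ReLU non-linearity."""
    A_hat = A + np.eye(A.shape[0])             # self-loops: a node keeps its own state
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # degree normalisation
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)  # ReLU

# A 3-node path graph, one-hot initial node states, toy weights.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
W = np.full((3, 2), 0.5)

H1 = message_passing_layer(A, H, W)  # new node states after one round
```

Stacking k such layers lets each node's state depend on its k-hop neighbourhood, which is what the article means by "arbitrary depth".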
They have also been used to advance research in network architectures, optimisation techniques, and parallel computation. GNN variants such as the graph convolutional network (GCN), graph attention network (GAT), and gated graph neural network (GGNN) have demonstrated outstanding performance on many tasks.
GNNs can do what CNNs cannot. GNNs are inherently rotation and translation invariant, since there is no notion of rotation or translation in graphs. Applying CNNs to graphs is challenging due to a graph's arbitrary size and complex topology.
GNNs can handle graph input effectively, unlike standard neural networks. To present a graph completely to a model such as a CNN or RNN, we would have to feed in every possible ordering of its nodes, which is highly redundant to compute. GNNs avoid this by propagating on each node separately, ignoring the input order of nodes.
In standard neural networks, dependency information is only regarded as a feature of the nodes. GNNs, by contrast, propagate along the graph structure itself instead of treating it merely as part of the features.
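The order-independence claim above can be demonstrated in a few lines. A permutation-invariant aggregator, here a simple sum over node features, produces the same result no matter how the nodes are numbered, so no canonical ordering ever needs to be fixed. The feature values below are arbitrary.

```python
import numpy as np

# Three nodes, each with a 2-dimensional feature vector.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Relabel (permute) the nodes arbitrarily.
perm = np.random.permutation(len(X))

# A sum readout is permutation invariant: same answer either way,
# which is why GNNs need not traverse all possible node orders.
readout_original = X.sum(axis=0)
readout_permuted = X[perm].sum(axis=0)

print(np.allclose(readout_original, readout_permuted))  # True
```

Mean- and max-pooling share this property; concatenating node features in a fixed order, as a standard feed-forward network would, does not.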
GNNs can also model reasoning processes similar to the human brain's, unlike other neural network methods. There are many graph variants currently in use: directed graphs, heterogeneous graphs, dynamic graphs, and graphs with edge information, to name a few.
GNNs are already making waves in fields such as social influence prediction and healthcare domains. These models are anticipated to play a pivotal role in knowledge discovery, recommendations and sentiment analysis in the coming years.
Computer vision is one of the popular areas where CNNs have been extensively used, and GNNs are proving quite handy there too, particularly in image classification problems. They are also being used to enhance human-object interaction detection, few-shot image classification, etc.
Life sciences is another application area for GNNs. Physics, chemistry and drug research have benefited from representing the interactions between particles or molecules as a graph. For instance, researchers from DeepMind have applied GNNs to simulate the dynamics of complex systems of particles such as water or sand. It can be used to study the dynamics of the whole system. In drug discovery, graphs represent interactions between complex structures such as proteins, mRNA, metabolites etc.
Companies are exploring GNNs in recommendation systems, where graphs represent user interactions with products. For instance, Uber Eats uses the GraphSAGE network to recommend food items and restaurants to users. Graph embeddings are also popularly used for recommendation.
Another area is combinatorial optimisation problems, with applications in finance, logistics, energy, etc. Most problems in this field are formulated with graphs. For instance, the Google Brain team used GNNs to optimise the power, area, and performance of chip blocks for new hardware such as Google's TPUs.
NLP is another area where graphs are heavily used. GNNs are applied to many NLP problems such as text classification, exploiting semantics in machine translation, user geolocation, relation extraction, or question answering.
GNNs are also used for forecasting traffic movement, volume, or the density of roads. A traffic network can be modelled as a spatial-temporal graph where the nodes are sensors installed on roads and the edges are weighted by the distance between pairs of sensors.
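A sketch of how such a spatial graph is typically built: sensors become nodes, and edge weights decay with distance via a Gaussian kernel, with negligible connections pruned. The sensor coordinates, kernel width, and threshold below are illustrative assumptions, not taken from any real traffic dataset.

```python
import numpy as np

# Hypothetical 2D positions of four road sensors (the graph's nodes).
coords = np.array([[0.0, 0.0],
                   [1.0, 0.0],
                   [0.0, 2.0],
                   [5.0, 5.0]])

# Pairwise distances between all sensors.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# Gaussian kernel: closer sensors get stronger edges.
sigma, threshold = 2.0, 0.1
W = np.exp(-dist**2 / sigma**2)
W[W < threshold] = 0.0   # prune negligible, far-apart connections
np.fill_diagonal(W, 0.0)  # no self-edges
```

The resulting weighted adjacency matrix W is the spatial half of the spatial-temporal graph; the temporal half comes from each sensor's time series of readings.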
According to a paper titled Graph Neural Networks: A Review of Methods and Applications, below are a few challenges with GNNs.
One challenge is dynamic graphs: dealing with graphs whose structure changes over time. While static graphs are stable and can be modelled feasibly, dynamic graphs introduce changing structures that are harder to handle.
Another challenge is scalability. Applying embedding methods to social networks or recommendation systems has been a significant problem for almost all graph embedding algorithms, including GNNs. Scaling up GNNs is problematic because it is computationally expensive.
GNNs are also difficult to apply in non-structural scenarios, where finding the best way to generate a graph from raw data remains an open challenge.
GNNs have made significant contributions over the last few years in machine learning tasks, thanks to advances in model flexibility and training algorithms. If the challenges listed above are dealt with effectively, GNNs can become the next big technological innovation of the decade.