Blog

What are Graph Neural Networks, and what do they mean for retailers?

In a year dominated by the topic of artificial intelligence (AI), it’s only natural that some AI-related terms would also begin making their way into the mainstream. Large Language Models, Natural Language Processing, and deep learning are all subjects no longer confined solely to academic or data science environments – and Graph Neural Networks (GNNs), the subject of this post, are beginning to gain traction too.

If you haven’t come across GNNs before, or have heard of them but aren’t sure exactly what they are, then there are three key things to know:

  • GNNs are a subclass of neural networks, which are themselves founded on the idea that computers can be taught to handle data in a similar way to the human brain.
  • GNNs are neural networks that have been specifically adapted to work with graphs.
  • While graph learning algorithms existed before GNNs arrived, GNNs are unique in that they draw on the infrastructure and knowledge that has been built up around deep learning.

So, why the focus on graphs in the first place? Graphs have two key elements:

  • “Nodes”, which are the individual data points.
  • “Edges”, which are the relationships between those nodes.

Together, these make graphs a powerful tool for representing real-world phenomena – whether that’s social interactions, molecular interactions, or even a map of the internet.
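
To make that concrete, here’s a minimal sketch of a graph as plain data: just a handful of nodes and the edges between them. The people and relationships below are invented purely for illustration.

```python
# A tiny, purely illustrative graph: nodes are people, edges are
# "knows" relationships between them.
nodes = ["alice", "bob", "carol", "dave"]
edges = [("alice", "bob"), ("bob", "carol"), ("carol", "dave"), ("dave", "alice")]

# An adjacency list makes it easy to ask "who is connected to whom?"
adjacency = {node: [] for node in nodes}
for a, b in edges:
    adjacency[a].append(b)
    adjacency[b].append(a)  # undirected: the relationship runs both ways

print(adjacency["alice"])  # ['bob', 'dave']
```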

Moreover, a lot of information can be thought of as special kinds of graphs. A digital photo, for instance, is just a grid in which all of the individual pixels are connected to their adjacent neighbours. And every sentence is effectively a graph, with preceding and successive words as neighbouring points.

These are rather simple examples, but the point is that graphs can be everywhere – and they are often incredibly complex.

Think about your last trip to the grocery store: what did you buy? Which adverts did you see while you were there? How long were you there for? Which aisles did you browse? Which day of the week was it? Mapping all that information onto a graph and analysing it might be able to tell us some incredibly interesting things about how you shop – but doing so would also take countless hours.
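
As a loose illustration of what that mapping might look like, here’s a small sketch using the networkx library. The customer, product, store and their attributes are all made up, and a real retail graph would of course be vastly larger.

```python
import networkx as nx

# Illustrative only: one shopping trip captured as nodes (customer, product,
# store) and attributed edges (the relationships between them).
G = nx.Graph()
G.add_node("customer_42", kind="customer")
G.add_node("oat_milk", kind="product", aisle="dairy alternatives")
G.add_node("store_17", kind="store")

G.add_edge("customer_42", "store_17", relation="visited", day="Saturday", minutes=35)
G.add_edge("customer_42", "oat_milk", relation="purchased", quantity=2)
G.add_edge("oat_milk", "store_17", relation="stocked_in")

# The whole trip (who, what, where, when) now lives in a single structure.
print(list(G.neighbors("customer_42")))   # ['store_17', 'oat_milk']
print(G.edges["customer_42", "store_17"])  # {'relation': 'visited', 'day': 'Saturday', 'minutes': 35}
```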

GNNs, on the other hand, can carry out that kind of task far more quickly and efficiently. That’s because they excel at understanding the complex relationships that modern retail is built on, which could also make them incredibly useful for tackling some of the industry’s biggest questions.

How do GNNs work?

A GNN works by passing and aggregating “messages” between each of the nodes. This allows it to a) learn the attributes of each node, and b) understand how each of those nodes is related to the others. The powerful thing about this approach is that a GNN can start to learn and make predictions from raw data, without the need for manual feature engineering.
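
To give a feel for the mechanics, here’s a deliberately simplified, framework-free sketch of a single round of message passing on a toy four-node graph. The weights are random rather than learned, and real GNN layers are considerably more sophisticated.

```python
import numpy as np

# Toy graph: four nodes with two-dimensional features, connected in a ring.
X = np.array([
    [1.0, 0.0],   # node 0
    [0.0, 1.0],   # node 1
    [1.0, 1.0],   # node 2
    [0.5, 0.5],   # node 3
])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

rng = np.random.default_rng(0)
W_self = rng.normal(scale=0.1, size=(2, 2))  # transforms a node's own features
W_msg = rng.normal(scale=0.1, size=(2, 2))   # transforms incoming messages

def message_passing_step(X, edges):
    # 1. Each node sends its (transformed) features along its edges.
    # 2. Each node aggregates (here: sums) the messages it receives.
    # 3. The aggregate is combined with the node's own features.
    agg = np.zeros_like(X)
    for src, dst in edges:
        agg[dst] += X[src] @ W_msg
        agg[src] += X[dst] @ W_msg  # undirected: messages flow both ways
    return np.tanh(X @ W_self + agg)

H = message_passing_step(X, edges)  # updated node embeddings
print(H.shape)  # (4, 2): one embedding per node
```

Stacking several rounds like this is what lets information travel beyond a node’s immediate neighbours, which is where much of a GNN’s power comes from.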

Our own research into the capabilities of GNNs is revealing a number of areas where they can play a pivotal role in generating vital customer insight for the retail sector.

For example, they can help in the creation of unified prediction models, ones capable of accurately forecasting whether a customer is likely to buy a product at a specific time. By providing a GNN with all the information we can about a customer’s prior purchasing behaviours – and the attributes of the item in question – it could ultimately start to make some very smart predictions about their likelihood to purchase.
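
As a rough sketch of what that final prediction step might look like, suppose the GNN has already produced an embedding for a customer and for a product (the names and numbers below are invented). Scoring the pair can then be as simple as comparing the two vectors.

```python
import numpy as np

# Invented embeddings for one customer and one product, standing in for
# the output of a trained GNN.
customer_emb = {"customer_42": np.array([0.12, -0.40, 0.88])}
product_emb = {"oat_milk": np.array([0.05, -0.33, 0.91])}

def purchase_score(customer_id, product_id):
    # Dot product of the two embeddings, squashed to a 0-1 "likelihood".
    z = customer_emb[customer_id] @ product_emb[product_id]
    return 1.0 / (1.0 + np.exp(-z))

print(round(purchase_score("customer_42", "oat_milk"), 2))  # 0.72
```

In practice the scoring function would itself be learned alongside the embeddings, but the shape of the idea is the same: the graph does the heavy lifting of turning raw behaviour into comparable vectors.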

As you can imagine, this capability has many potential applications across multiple verticals. It is particularly well suited to the kind of “cold start” problems that go hand-in-hand with new customer sign-ups, where – without any information on an individual and their preferences – it is extremely difficult to recommend any products to them. Because a GNN reasons over relationships, even one or two early interactions can connect a new customer to a rich web of similar shoppers and products, giving the model something to work from far sooner.

Why isn’t everyone already using GNNs?

Since GNNs seem to present such a clear opportunity, the logical question that follows is why they’re not being adopted as quickly as Large Language Models like ChatGPT. Part of the answer lies in the fact that there is currently very little standardisation when it comes to GNN algorithms, meaning there are also very few resources for most data scientists to draw on.

From a technological standpoint, there are three main areas of focus when it comes to the development of a GNN:

  • Graph pre-processing – which allows a graph to be built from raw data.
  • GNN blocks – which handle message passing and the generation of embeddings.
  • Storage and retrieval – specifically, the storage and retrieval of those embeddings (sketched briefly after this list).
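
To make that third area a little more concrete, here’s a simplified sketch in which a plain in-memory array stands in for whatever embedding store would be used in practice. Again, the node names and vectors are invented.

```python
import numpy as np

# Invented node embeddings, stored in a plain array alongside their IDs.
node_ids = ["customer_42", "oat_milk", "store_17"]
embeddings = np.array([
    [0.12, -0.40, 0.88],
    [0.05, -0.33, 0.91],
    [0.70, 0.10, -0.20],
])

def most_similar(node_id, k=1):
    # Cosine similarity between the query node and every stored embedding.
    q = embeddings[node_ids.index(node_id)]
    sims = embeddings @ q / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(q))
    ranked = np.argsort(-sims)
    return [(node_ids[i], round(float(sims[i]), 2)) for i in ranked if node_ids[i] != node_id][:k]

print(most_similar("customer_42"))  # [('oat_milk', 0.99)]
```

At retail scale, with millions of nodes, even this simple lookup becomes non-trivial, which is why the storage layer matters so much.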

Today, all three of these areas present significant challenges. The way in which we store and sample an underlying graph can have a huge impact on the performance of an algorithm, for instance. The implementation of different architectures can also vary significantly, making it hard to iterate from one to another. And traditional databases are very inefficient when it comes to even simple graph-related queries. Because of this, spinning up a GNN can quickly escalate from a fairly simple coding challenge into one that requires a vast amount of resource to get right. Inevitably, that makes the current environment a tricky one to work in.

Given the fast-paced nature of this community, it’s going to be very interesting to watch how this space evolves. It could very well come to be dominated by a handful of big models, like LLMs, that are used by all but owned by a few; by service providers that solve these problems; or by purely open-source efforts – or some combination of all three.

We think it is too soon to say what will dominate the GNN space. But, then again, it is our job to be on the lookout for change. Let us know in the comments how you feel about the evolution of GNNs, or whether you think it is just hype that will ultimately die down.
