
Compiled Graph Neural Networks

torch.compile() is the latest method to speed up your PyTorch code in torch >= 2.0.0! torch.compile() makes PyTorch code run faster by JIT-compiling it into optimized kernels, all while requiring minimal code changes.

Under the hood, torch.compile() captures PyTorch programs via TorchDynamo, canonicalizes over 2,000 PyTorch operators via PrimTorch, and finally generates fast code out of it across multiple accelerators and backends via the deep learning compiler TorchInductor.
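
The same machinery applies to any PyTorch function, not only PyG models. The snippet below is a minimal sketch for illustration (the function and tensor shapes are made up); it compiles a plain function with the default TorchInductor backend:

import torch

# A plain PyTorch function; the decorator form of torch.compile() also works.
def gelu_mlp(x, weight):
    return torch.nn.functional.gelu(x @ weight)

# "inductor" is the default backend; shapes below are arbitrary placeholders.
compiled_fn = torch.compile(gelu_mlp, backend="inductor")

x = torch.randn(32, 64)
weight = torch.randn(64, 64)
out = compiled_fn(x, weight)  # first call triggers compilation; later calls reuse the kernels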

Note

See the official PyTorch tutorial for a general introduction on how to leverage torch.compile(), and the torch.compile() API documentation for a description of its interface.

In this tutorial, we show how to optimize your custom PyG model via torch.compile().

Note

From PyG 2.5 (and onwards), torch.compile() is now fully compatible with all PyG GNN layers. If you are on an earlier version of PyG, consider using torch_geometric.compile() instead.
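
On those earlier versions, the PyG-provided wrapper is used in place of torch.compile() in the snippets below. A minimal sketch (model stands for any PyG GNN module defined as in the next section):

import torch_geometric

# PyG < 2.5 only: wrap the model with the PyG helper instead of torch.compile().
model = torch_geometric.compile(model)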

Basic Usage

Once you have a PyG model defined, simply wrap it with torch.compile() to obtain its optimized version:

import torch
from torch_geometric.nn import GraphSAGE

model = GraphSAGE(in_channels, hidden_channels, num_layers, out_channels)
model = model.to(device)

model = torch.compile(model)

and execute it as usual:

from torch_geometric.datasets import Planetoid

dataset = Planetoid(root, name="Cora")
data = dataset[0].to(device)

out = model(data.x, data.edge_index)
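
The compiled model behaves like any other nn.Module, so a standard training loop works unchanged. The following is a minimal sketch; the optimizer choice, learning rate, and number of epochs are illustrative assumptions:

import torch.nn.functional as F

# Hypothetical training setup; hyperparameters are placeholders.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    # Assumes out_channels == dataset.num_classes; Planetoid provides y and train_mask.
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()   # gradients flow through the compiled kernels as usual
    optimizer.step()

Note that the very first forward pass triggers compilation, so the first iteration is noticeably slower than the rest.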

Maximizing Performance

The torch.compile() method provides two important arguments to be aware of:

  * dynamic: Mini-batches in PyG are often dynamic by nature, i.e. their shape (number of nodes and edges) varies from batch to batch. Passing dynamic=True enables dynamic shape tracing up front and avoids recompiling the model for every newly observed shape. If input shapes are static (e.g., full-graph training on a single graph), dynamic=False lets the compiler specialize on those shapes.

  * fullgraph: To maximize the speedup, graph breaks in the compiled model should be kept to a minimum. Passing fullgraph=True makes compilation raise an error on the first graph break encountered, which helps to verify that the whole model is captured as a single graph.
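
For example, to trace dynamic shapes and fail fast on graph breaks, the call from the Basic Usage section could be replaced as follows (a sketch; whether these flags help depends on your model and data):

# Instead of the plain torch.compile(model) call above:
model = torch.compile(model, dynamic=True, fullgraph=True)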

Example Scripts

We have incorporated multiple examples in examples/compile that further demonstrate the practical usage of torch.compile() (a minimal sketch of the dynamic-shapes pattern follows the list):

  1. Node Classification via GCN (dynamic=False)

  2. Graph Classification via GIN (dynamic=True)
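
The second pattern, graph-level mini-batching with batch shapes that vary from step to step, looks roughly as follows. This is a sketch only: the TUDataset/MUTAG dataset, batch size, and model hyperparameters are illustrative assumptions and do not mirror the example script exactly (root and device are placeholders as in the snippets above):

import torch
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GIN, global_add_pool

dataset = TUDataset(root, name="MUTAG")
loader = DataLoader(dataset, batch_size=32, shuffle=True)

gnn = GIN(dataset.num_features, 64, num_layers=3).to(device)
gnn = torch.compile(gnn, dynamic=True)  # batch shapes vary, so trace dynamic shapes

for batch in loader:
    batch = batch.to(device)
    node_emb = gnn(batch.x, batch.edge_index)
    graph_emb = global_add_pool(node_emb, batch.batch)  # one embedding per graph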

If you notice that torch.compile() fails for a certain PyG model, do not hesitate to reach out either on GitHub or Slack. We are very eager to improve torch.compile() support across the whole PyG code base.

Benchmark

torch.compile() works fantastically well for many PyG models. Overall, we observe runtime improvements of up to 300%.

Specifically, we benchmark GCN, GraphSAGE and GIN and compare runtimes obtained from traditional eager mode and torch.compile(). We use a synthetic graph with 10,000 nodes and 200,000 edges, and a hidden feature dimensionality of 64. We report runtimes over 500 optimization steps:
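
A self-contained micro-benchmark in the same spirit might look like the sketch below. The synthetic graph sizes match the description above, but the timing harness (warm-up, CUDA synchronization, choice of optimizer and loss) is our own assumption rather than the exact benchmark code:

import time
import torch
from torch_geometric.nn import GCN

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Synthetic graph: 10,000 nodes, 200,000 edges, 64 hidden features.
num_nodes, num_edges, channels = 10_000, 200_000, 64
x = torch.randn(num_nodes, channels, device=device)
edge_index = torch.randint(num_nodes, (2, num_edges), device=device)
y = torch.randint(channels, (num_nodes,), device=device)

def benchmark(model, steps=500):
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(10):  # warm-up (includes compilation for the compiled model)
        model(x, edge_index).sum().backward()
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(steps):
        optimizer.zero_grad()
        out = model(x, edge_index)
        loss = torch.nn.functional.cross_entropy(out, y)
        loss.backward()
        optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

eager = GCN(channels, channels, num_layers=2, out_channels=channels).to(device)
compiled = torch.compile(GCN(channels, channels, num_layers=2, out_channels=channels).to(device))

print(f"eager:    {benchmark(eager):.4f}s")
print(f"compiled: {benchmark(compiled):.4f}s")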

Model      Mode      Forward   Backward  Total     Speedup
GCN        Eager     2.6396s   2.1697s   4.8093s   -
GCN        Compiled  1.1082s   0.5896s   1.6978s   2.83x
GraphSAGE  Eager     1.6023s   1.6428s   3.2451s   -
GraphSAGE  Compiled  0.7033s   0.7465s   1.4498s   2.24x
GIN        Eager     1.6701s   1.6990s   3.3690s   -
GIN        Compiled  0.7320s   0.7407s   1.4727s   2.29x

To reproduce these results, run

python test/nn/models/test_basic_gnn.py

from the root folder of a PyG repository checked out from GitHub.

