Reduces all values from the src tensor at the indices specified in the index tensor along a given dimension dim.
Returns the indices that sort the tensor src along a given dimension in ascending order by value.
Concatenates the given sequence of tensors tensors in the given dimension dim.
Reduces all values in the first dimension of the src tensor within the ranges specified in the ptr.
Sorts the elements of the inputs tensor in ascending order.
Returns the cumulative sum of elements of x.
Computes the (unweighted) degree of a given one-dimensional index tensor.
Computes a sparsely evaluated softmax.
Performs an indirect stable sort using a sequence of keys.
Row-wise sorts edge_index.
Row-wise sorts edge_index and removes its duplicated entries.
Returns True if the graph given by edge_index is undirected.
Converts the graph given by edge_index to an undirected graph such that \((j,i) \in \mathcal{E}\) for every edge \((i,j) \in \mathcal{E}\).
Returns True if the graph given by edge_index contains self-loops.
Removes every self-loop in the graph given by edge_index, so that \((i,i) \not\in \mathcal{E}\) for every \(i \in \mathcal{V}\).
Segregates self-loops from the graph.
Adds a self-loop \((i,i) \in \mathcal{E}\) to every node \(i \in \mathcal{V}\) in the graph given by edge_index.
Adds remaining self-loop \((i,i) \in \mathcal{E}\) to every node \(i \in \mathcal{V}\) in the graph given by edge_index.
Returns the edge features or weights of self-loops \((i, i)\) of every node \(i \in \mathcal{V}\) in the graph given by edge_index.
Returns True if the graph given by edge_index contains isolated nodes.
Removes the isolated nodes from the graph given by edge_index with optional edge attributes edge_attr.
Returns the number of hops the model is aggregating information from.
Returns the induced subgraph of (edge_index, edge_attr) containing the nodes in subset.
Returns the induced subgraph of the bipartite graph (edge_index, edge_attr) containing the nodes in subset.
Computes the induced subgraph of edge_index around all nodes in node_idx reachable within \(k\) hops.
Randomly drops nodes from the adjacency matrix edge_index with probability p using samples from a Bernoulli distribution.
Randomly drops edges from the adjacency matrix edge_index with probability p using samples from a Bernoulli distribution.
Drops edges from the adjacency matrix edge_index based on random walks.
Randomly drops edges from the adjacency matrix (edge_index, edge_attr) with probability p using samples from a Bernoulli distribution.
The homophily of a graph characterizes how likely nodes with the same label are near each other in a graph.
The degree assortativity coefficient from the "Mixing patterns in networks" paper.
Applies normalization to the edges of a graph.
Computes the graph Laplacian of the graph given by edge_index and optional edge_weight.
Computes the mesh Laplacian of a mesh given by pos and face.
Returns a new tensor which masks the src tensor along the dimension dim according to the boolean mask mask.
Converts indices to a mask representation.
Converts a mask to an index representation.
Selects the input tensor or input list according to a given index or mask vector.
Narrows the input tensor or input list to the specified range.
Given a sparse batch of node features \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times F}\) (with \(N_i\) indicating the number of nodes in graph \(i\)), creates a dense node feature tensor \(\mathbf{X} \in \mathbb{R}^{B \times N_{\max} \times F}\) (with \(N_{\max} = \max_i^B N_i\)).
Converts batched sparse adjacency matrices given by edge indices and edge attributes to a single dense batched adjacency matrix.
Given a contiguous batch of tensors \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times *}\) (with \(N_i\) indicating the number of elements in example \(i\)), creates a nested PyTorch tensor.
Given a nested PyTorch tensor, creates a contiguous batch of tensors \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times *}\), and optionally a batch vector which assigns each element to a specific example.
Converts a dense adjacency matrix to a sparse adjacency matrix defined by edge indices and edge attributes.
Returns True if the input src is a torch.sparse.Tensor (in any sparse layout).
Returns True if the input src is of type torch.sparse.Tensor (in any sparse layout) or of type torch_sparse.SparseTensor.
Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_coo.
Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_csr.
Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_csc.
Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with custom layout.
Converts a torch.sparse.Tensor or a torch_sparse.SparseTensor to edge indices and edge attributes.
Matrix product of sparse matrix with dense matrix.
Splits src according to a batch vector along dimension dim.
Splits the edge_index according to a batch vector.
Takes a one-dimensional index tensor and returns a one-hot encoded representation of it with shape [*, num_classes] that has zeros everywhere except where the index of the last dimension matches the corresponding value of the input tensor, in which case it will be 1.
Computes the normalized cut \(\mathbf{e}_{i,j} \cdot \left( \frac{1}{\deg(i)} + \frac{1}{\deg(j)} \right)\) of a weighted graph given by edge indices and edge attributes.
Returns the edge indices of a two-dimensional grid graph with height height and width width and its node positions.
Computes (normalized) geodesic distances of a mesh given by pos and face.
Converts a graph given by edge indices and edge attributes to a scipy sparse matrix.
Converts a scipy sparse matrix to edge indices and edge attributes.
Converts a torch_geometric.data.Data instance to a networkx.Graph if to_undirected is set to True, or a directed networkx.DiGraph otherwise.
Converts a networkx.Graph or networkx.DiGraph to a torch_geometric.data.Data instance.
Converts a (edge_index, edge_weight) tuple to a networkit.Graph.
Converts a networkit.Graph to a (edge_index, edge_weight) tuple.
Converts a torch_geometric.data.Data instance to a trimesh.Trimesh.
Converts a trimesh.Trimesh to a torch_geometric.data.Data instance.
Converts a graph given by edge_index and optional edge_weight into a cugraph graph object.
Converts a cugraph graph object into edge_index and optional edge_weight tensors.
Converts a torch_geometric.data.Data or torch_geometric.data.HeteroData instance to a dgl graph object.
Converts a dgl graph object to a torch_geometric.data.Data or torch_geometric.data.HeteroData instance.
Converts a rdkit.Chem.Mol instance to a torch_geometric.data.Data instance.
Converts a torch_geometric.data.Data instance to a rdkit.Chem.Mol instance.
Converts a SMILES string to a torch_geometric.data.Data instance.
Converts a torch_geometric.data.Data instance to a SMILES string.
Returns the edge_index of a random Erdos-Renyi graph.
Returns the edge_index of a stochastic blockmodel graph.
Returns the edge_index of a Barabasi-Albert preferential attachment model, where a graph of num_nodes nodes grows by attaching new nodes with num_edges edges that are preferentially attached to existing nodes with high degree.
Samples random negative edges of a graph given by edge_index.
Samples random negative edges of multiple graphs given by edge_index and batch.
Samples a negative edge (i,k) for every positive edge (i,j) in the graph given by edge_index, and returns it as a tuple of the form (i,j,k).
Randomly shuffles the feature matrix x along the first dimension.
Randomly masks features from the feature matrix x with probability p using samples from a Bernoulli distribution.
Randomly adds edges to edge_index.
The tree decomposition algorithm of molecules from the "Junction Tree Variational Autoencoder for Molecular Graph Generation" paper.
Returns the output embeddings of all MessagePassing layers in model.
Returns the output embeddings of all MessagePassing layers in a heterogeneous model, organized by edge type.
Trims the edge_index representation, node features x and edge features edge_attr to a minimal-sized representation for the current GNN layer layer in directed NeighborLoader scenarios.
Calculates the personalized PageRank (PPR) vector for all or a subset of nodes using a variant of the Andersen algorithm.
Splits the edges of a torch_geometric.data.Data object into positive and negative train/val/test edges.
Computes Jacobian-based influence aggregates for multiple seed nodes, as introduced in the "Towards Quantifying Long-Range Interactions in Graph Machine Learning: a Large Graph Dataset and a Measurement" paper.
Utility package.
Reduces all values from the src tensor at the indices specified in the index tensor along a given dimension dim. See the documentation of the torch_scatter package for more information.
src (torch.Tensor) – The source tensor.
index (torch.Tensor) – The index tensor.
dim (int, optional) – The dimension along which to index. (default: 0)
dim_size (int, optional) – The size of the output tensor at dimension dim. If set to None, will create a minimal-sized output tensor according to index.max() + 1. (default: None)
reduce (str, optional) – The reduce operation ("sum", "mean", "mul", "min", "max" or "any"). (default: "sum")
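Example (an illustrative sketch, not from the original docstring; it assumes scatter has been imported from torch_geometric.utils):
>>> src = torch.tensor([1., 2., 3., 4.])
>>> index = torch.tensor([0, 0, 1, 1])
>>> scatter(src, index, dim=0, reduce='sum')
tensor([3., 7.])
>>> scatter(src, index, dim=0, reduce='max')
tensor([2., 4.])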
Returns the indices that sort the tensor src along a given dimension in ascending order by value. In contrast to torch.argsort(), sorting is performed in groups according to the values in index.
src (torch.Tensor) – The source tensor.
index (torch.Tensor) – The index tensor.
dim (int, optional) – The dimension along which to index. (default: 0)
num_groups (int, optional) – The number of groups. (default: None)
descending (bool, optional) – Controls the sorting order (ascending or descending). (default: False)
return_consecutive (bool, optional) – If set to True, will not offset the output to start from 0 for each group. (default: False)
stable (bool, optional) – Controls the relative order of equivalent elements. (default: False)
Example
>>> src = torch.tensor([0, 1, 5, 4, 3, 2, 6, 7, 8])
>>> index = torch.tensor([0, 0, 1, 1, 1, 1, 2, 2, 2])
>>> group_argsort(src, index)
tensor([0, 1, 3, 2, 1, 0, 0, 1, 2])
Concatenates the given sequence of tensors tensors in the given dimension dim. Different from torch.cat(), values along the concatenating dimension are grouped according to the indices defined in the index tensors. All tensors must have the same shape (except in the concatenating dimension).
Example
>>> x1 = torch.tensor([[0.2716, 0.4233],
...                    [0.3166, 0.0142],
...                    [0.2371, 0.3839],
...                    [0.4100, 0.0012]])
>>> x2 = torch.tensor([[0.3752, 0.5782],
...                    [0.7757, 0.5999]])
>>> index1 = torch.tensor([0, 0, 1, 2])
>>> index2 = torch.tensor([0, 2])
>>> scatter_concat([x1, x2], [index1, index2], dim=0)
tensor([[0.2716, 0.4233],
        [0.3166, 0.0142],
        [0.3752, 0.5782],
        [0.2371, 0.3839],
        [0.4100, 0.0012],
        [0.7757, 0.5999]])
Reduces all values in the first dimension of the src tensor within the ranges specified in the ptr. See the documentation of the torch_scatter package for more information.
src (torch.Tensor) – The source tensor.
ptr (torch.Tensor) – A monotonically increasing pointer tensor that refers to the boundaries of segments such that ptr[0] = 0 and ptr[-1] = src.size(0).
reduce (str, optional) – The reduce operation ("sum", "mean", "min" or "max"). (default: "sum")
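Example (an illustrative sketch, not from the original docstring; it assumes segment is imported from torch_geometric.utils):
>>> src = torch.arange(6, dtype=torch.float)
>>> ptr = torch.tensor([0, 2, 6])
>>> segment(src, ptr, reduce='sum')
tensor([ 1., 14.])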
Sorts the elements of the inputs tensor in ascending order. It is expected that inputs is one-dimensional and that it only contains positive integer values. If max_value is given, it can be used by the underlying algorithm for better performance.
inputs (torch.Tensor) – A vector with positive integer values.
max_value (int, optional) – The maximum value stored inside inputs. This value can be an estimation, but needs to be greater than or equal to the real maximum. (default: None)
stable (bool, optional) – Makes the sorting routine stable, which guarantees that the order of equivalent elements is preserved. (default: False)
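Example (an illustrative sketch, not from the original docstring; it assumes index_sort returns both the sorted values and the permutation indices, analogous to torch.sort()):
>>> inputs = torch.tensor([3, 1, 2])
>>> index_sort(inputs)
(tensor([1, 2, 3]), tensor([1, 2, 0]))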
Returns the cumulative sum of elements of x. In contrast to torch.cumsum(), prepends the output with zero.
x (torch.Tensor) – The input tensor.
dim (int, optional) – The dimension to do the operation over. (default: 0)
Example
>>> x = torch.tensor([2, 4, 1])
>>> cumsum(x)
tensor([0, 2, 6, 7])
Computes the (unweighted) degree of a given one-dimensional index tensor.
index (LongTensor) – Index tensor.
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of index. (default: None)
dtype (torch.dtype, optional) – The desired data type of the returned tensor.
Tensor
Example
>>> row = torch.tensor([0, 1, 0, 2, 0])
>>> degree(row, dtype=torch.long)
tensor([3, 1, 1])
Computes a sparsely evaluated softmax. Given a value tensor src, this function first groups the values along the first dimension based on the indices specified in index, and then proceeds to compute the softmax individually for each group.
src (Tensor) – The source tensor.
index (LongTensor, optional) – The indices of elements for applying the softmax. (default: None)
ptr (LongTensor, optional) – If given, computes the softmax based on sorted inputs in CSR representation. (default: None)
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of index. (default: None)
dim (int, optional) – The dimension in which to normalize. (default: 0)
Tensor
Examples
>>> src = torch.tensor([1., 1., 1., 1.])
>>> index = torch.tensor([0, 0, 1, 2])
>>> ptr = torch.tensor([0, 2, 3, 4])
>>> softmax(src, index)
tensor([0.5000, 0.5000, 1.0000, 1.0000])
>>> softmax(src, None, ptr)
tensor([0.5000, 0.5000, 1.0000, 1.0000])
>>> src = torch.randn(4, 4)
>>> ptr = torch.tensor([0, 4])
>>> softmax(src, index, dim=-1)
tensor([[0.7404, 0.2596, 1.0000, 1.0000],
        [0.1702, 0.8298, 1.0000, 1.0000],
        [0.7607, 0.2393, 1.0000, 1.0000],
        [0.8062, 0.1938, 1.0000, 1.0000]])
Performs an indirect stable sort using a sequence of keys.
Given multiple sorting keys, returns an array of integer indices that describe their sort order. The last key in the sequence is used for the primary sort order, the second-to-last key for the secondary sort order, and so on.
keys ([torch.Tensor]) – The \(k\) different columns to be sorted. The last key is the primary sort key.
dim (int, optional) – The dimension to sort along. (default: -1)
descending (bool, optional) – Controls the sorting order (ascending or descending). (default: False)
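Example (an illustrative sketch, not from the original docstring; it mirrors the semantics of numpy.lexsort, with the last key acting as the primary sort key):
>>> keys = [torch.tensor([0, 1, 1, 0]),  # secondary key
...         torch.tensor([2, 2, 1, 1])]  # primary key (last)
>>> lexsort(keys)
tensor([3, 2, 0, 1])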
Row-wise sorts edge_index.
edge_index (torch.Tensor) – The edge indices.
edge_attr (torch.Tensor or List[torch.Tensor], optional) – Edge weights or multi-dimensional edge features. If given as a list, will re-shuffle and remove duplicates for all its entries. (default: None)
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)
sort_by_row (bool, optional) – If set to False, will sort edge_index column-wise/by destination node. (default: True)
LongTensor if edge_attr is not passed, else (LongTensor, Optional[Tensor] or List[Tensor])
Warning
From PyG >= 2.3.0 onwards, this function will always return a tuple whenever edge_attr is passed as an argument (even in case it is set to None).
Examples
>>> edge_index = torch.tensor([[2, 1, 1, 0],
...                            [1, 2, 0, 1]])
>>> edge_attr = torch.tensor([[1], [2], [3], [4]])
>>> sort_edge_index(edge_index)
tensor([[0, 1, 1, 2],
        [1, 0, 2, 1]])
>>> sort_edge_index(edge_index, edge_attr)
(tensor([[0, 1, 1, 2],
        [1, 0, 2, 1]]),
 tensor([[4], [3], [2], [1]]))
Row-wise sorts edge_index and removes its duplicated entries. Duplicate entries in edge_attr are merged by scattering them together according to the given reduce option.
edge_index (torch.Tensor) – The edge indices.
edge_attr (torch.Tensor or List[torch.Tensor], optional) – Edge weights or multi-dimensional edge features. If given as a list, will re-shuffle and remove duplicates for all its entries. (default: None)
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)
reduce (str, optional) – The reduce operation to use for merging edge features ("sum", "mean", "min", "max", "mul", "any"). (default: "sum")
is_sorted (bool, optional) – If set to True, will expect edge_index to be already sorted row-wise. (default: False)
sort_by_row (bool, optional) – If set to False, will sort edge_index column-wise. (default: True)
LongTensor if edge_attr is not passed, else (LongTensor, Optional[Tensor] or List[Tensor])
Warning
From PyG >= 2.3.0 onwards, this function will always return a tuple whenever edge_attr is passed as an argument (even in case it is set to None).
Example
>>> edge_index = torch.tensor([[1, 1, 2, 3],
...                            [3, 3, 1, 2]])
>>> edge_attr = torch.tensor([1., 1., 1., 1.])
>>> coalesce(edge_index)
tensor([[1, 2, 3],
        [3, 1, 2]])
>>> # Sort `edge_index` column-wise
>>> coalesce(edge_index, sort_by_row=False)
tensor([[2, 3, 1],
        [1, 2, 3]])
>>> coalesce(edge_index, edge_attr)
(tensor([[1, 2, 3],
        [3, 1, 2]]),
 tensor([2., 1., 1.]))
>>> # Use 'mean' operation to merge edge features
>>> coalesce(edge_index, edge_attr, reduce='mean')
(tensor([[1, 2, 3],
        [3, 1, 2]]),
 tensor([1., 1., 1.]))
Returns True if the graph given by edge_index is undirected.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor or List[Tensor], optional) – Edge weights or multi-dimensional edge features. If given as a list, will check for equivalence in all its entries. (default: None)
num_nodes (int, optional) – The number of nodes, i.e. max(edge_index) + 1. (default: None)
Examples
>>> edge_index = torch.tensor([[0, 1, 0],
...                            [1, 0, 0]])
>>> weight = torch.tensor([0, 0, 1])
>>> is_undirected(edge_index, weight)
True
>>> weight = torch.tensor([0, 1, 1])
>>> is_undirected(edge_index, weight)
False
Converts the graph given by edge_index to an undirected graph such that \((j,i) \in \mathcal{E}\) for every edge \((i,j) \in \mathcal{E}\).
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor or List[Tensor], optional) – Edge weights or multi-dimensional edge features. If given as a list, will remove duplicates for all its entries. (default: None)
num_nodes (int, optional) – The number of nodes, i.e. max(edge_index) + 1. (default: None)
reduce (str, optional) – The reduce operation to use for merging edge features ("add", "mean", "min", "max", "mul"). (default: "add")
LongTensor if edge_attr is not passed, else (LongTensor, Optional[Tensor] or List[Tensor])
Warning
From PyG >= 2.3.0 onwards, this function will always return a tuple whenever edge_attr is passed as an argument (even in case it is set to None).
Examples
>>> edge_index = torch.tensor([[0, 1, 1],
...                            [1, 0, 2]])
>>> to_undirected(edge_index)
tensor([[0, 1, 1, 2],
        [1, 0, 2, 1]])
>>> edge_weight = torch.tensor([1., 1., 1.])
>>> to_undirected(edge_index, edge_weight)
(tensor([[0, 1, 1, 2],
        [1, 0, 2, 1]]),
 tensor([2., 2., 1., 1.]))
>>> # Use 'mean' operation to merge edge features
>>> to_undirected(edge_index, edge_weight, reduce='mean')
(tensor([[0, 1, 1, 2],
        [1, 0, 2, 1]]),
 tensor([1., 1., 1., 1.]))
Returns True if the graph given by edge_index contains self-loops.
edge_index (LongTensor) – The edge indices.
Examples
>>> edge_index = torch.tensor([[0, 1, 0],
...                            [1, 0, 0]])
>>> contains_self_loops(edge_index)
True
>>> edge_index = torch.tensor([[0, 1, 1],
...                            [1, 0, 2]])
>>> contains_self_loops(edge_index)
False
Removes every self-loop in the graph given by edge_index, so that \((i,i) \not\in \mathcal{E}\) for every \(i \in \mathcal{V}\).
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – Edge weights or multi-dimensional edge features. (default: None)
(LongTensor, Tensor)
Example
>>> edge_index = torch.tensor([[0, 1, 0],
...                            [1, 0, 0]])
>>> edge_attr = torch.tensor([[1, 2], [3, 4], [5, 6]])
>>> remove_self_loops(edge_index, edge_attr)
(tensor([[0, 1],
        [1, 0]]),
 tensor([[1, 2],
        [3, 4]]))
Segregates self-loops from the graph.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – Edge weights or multi-dimensional edge features. (default: None)
(LongTensor, Tensor, LongTensor, Tensor)
Example
>>> edge_index = torch.tensor([[0, 0, 1],
...                            [0, 1, 0]])
>>> (edge_index, edge_attr,
...  loop_edge_index,
...  loop_edge_attr) = segregate_self_loops(edge_index)
>>> loop_edge_index
tensor([[0],
        [0]])
Adds a self-loop \((i,i) \in \mathcal{E}\) to every node \(i \in \mathcal{V}\) in the graph given by edge_index. In case the graph is weighted or has multi-dimensional edge features (edge_attr != None), edge features of self-loops will be added according to fill_value.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – Edge weights or multi-dimensional edge features. (default: None)
fill_value (float or Tensor or str, optional) – The way to generate edge features of self-loops (in case edge_attr != None). If given as float or torch.Tensor, edge features of self-loops will be directly given by fill_value. If given as str, edge features of self-loops are computed by aggregating all features of edges that point to the specific node, according to a reduce operation ("add", "mean", "min", "max", "mul"). (default: 1.)
num_nodes (int or Tuple[int, int], optional) – The number of nodes, i.e. max_val + 1 of edge_index. If given as a tuple, then edge_index is interpreted as a bipartite graph with shape (num_src_nodes, num_dst_nodes). (default: None)
(LongTensor, Tensor)
Examples
>>> edge_index = torch.tensor([[0, 1, 0],
...                            [1, 0, 0]])
>>> edge_weight = torch.tensor([0.5, 0.5, 0.5])
>>> add_self_loops(edge_index)
(tensor([[0, 1, 0, 0, 1],
        [1, 0, 0, 0, 1]]),
 None)
>>> add_self_loops(edge_index, edge_weight)
(tensor([[0, 1, 0, 0, 1],
        [1, 0, 0, 0, 1]]),
 tensor([0.5000, 0.5000, 0.5000, 1.0000, 1.0000]))
>>> # edge features of self-loops are filled by constant `2.0`
>>> add_self_loops(edge_index, edge_weight, fill_value=2.)
(tensor([[0, 1, 0, 0, 1],
        [1, 0, 0, 0, 1]]),
 tensor([0.5000, 0.5000, 0.5000, 2.0000, 2.0000]))
>>> # Use 'add' operation to merge edge features for self-loops
>>> add_self_loops(edge_index, edge_weight, fill_value='add')
(tensor([[0, 1, 0, 0, 1],
        [1, 0, 0, 0, 1]]),
 tensor([0.5000, 0.5000, 0.5000, 1.0000, 0.5000]))
Adds remaining self-loop \((i,i) \in \mathcal{E}\) to every node \(i \in \mathcal{V}\) in the graph given by edge_index. In case the graph is weighted or has multi-dimensional edge features (edge_attr != None), edge features of non-existing self-loops will be added according to fill_value.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – Edge weights or multi-dimensional edge features. (default: None)
fill_value (float or Tensor or str, optional) – The way to generate edge features of self-loops (in case edge_attr != None). If given as float or torch.Tensor, edge features of self-loops will be directly given by fill_value. If given as str, edge features of self-loops are computed by aggregating all features of edges that point to the specific node, according to a reduce operation ("add", "mean", "min", "max", "mul"). (default: 1.)
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)
(LongTensor, Tensor)
Example
>>> edge_index = torch.tensor([[0, 1],
...                            [1, 0]])
>>> edge_weight = torch.tensor([0.5, 0.5])
>>> add_remaining_self_loops(edge_index, edge_weight)
(tensor([[0, 1, 0, 1],
        [1, 0, 0, 1]]),
 tensor([0.5000, 0.5000, 1.0000, 1.0000]))
Returns the edge features or weights of self-loops \((i, i)\) of every node \(i \in \mathcal{V}\) in the graph given by edge_index. Edge features of missing self-loops not present in edge_index will be filled with zeros. If edge_attr is not given, it will be the vector of ones.
Note
This operation is analogous to getting the diagonal elements of the dense adjacency matrix.
Tensor
Examples
>>> edge_index = torch.tensor([[0, 1, 0],
...                            [1, 0, 0]])
>>> edge_weight = torch.tensor([0.2, 0.3, 0.5])
>>> get_self_loop_attr(edge_index, edge_weight)
tensor([0.5000, 0.0000])
>>> get_self_loop_attr(edge_index, edge_weight, num_nodes=4)
tensor([0.5000, 0.0000, 0.0000, 0.0000])
Returns True if the graph given by edge_index contains isolated nodes.
Examples
>>> edge_index = torch.tensor([[0, 1, 0],
...                            [1, 0, 0]])
>>> contains_isolated_nodes(edge_index)
False
>>> contains_isolated_nodes(edge_index, num_nodes=3)
True
Removes the isolated nodes from the graph given by edge_index with optional edge attributes edge_attr. In addition, returns a mask of shape [num_nodes] to manually filter out isolated node features later on. Self-loops are preserved for non-isolated nodes.
(LongTensor, Tensor, BoolTensor)
Examples
>>> edge_index = torch.tensor([[0, 1, 0],
...                            [1, 0, 0]])
>>> edge_index, edge_attr, mask = remove_isolated_nodes(edge_index)
>>> mask  # node mask (2 nodes)
tensor([True, True])
>>> edge_index, edge_attr, mask = remove_isolated_nodes(edge_index,
...                                                     num_nodes=3)
>>> mask  # node mask (3 nodes)
tensor([True, True, False])
Returns the number of hops the model is aggregating information from.
Note
This function counts the number of message passing layers as an approximation of the total number of hops covered by the model. Its output may not necessarily be correct in case message passing layers perform multi-hop aggregation, e.g., as in ChebConv.
Example
>>> class GNN(torch.nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.conv1 = GCNConv(3, 16)
...         self.conv2 = GCNConv(16, 16)
...         self.lin = Linear(16, 2)
...
...     def forward(self, x, edge_index):
...         x = self.conv1(x, edge_index).relu()
...         x = self.conv2(x, edge_index).relu()
...         return self.lin(x)
>>> get_num_hops(GNN())
2
Returns the induced subgraph of (edge_index, edge_attr) containing the nodes in subset.
subset (LongTensor, BoolTensor or [int]) – The nodes to keep.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – Edge weights or multi-dimensional edge features. (default: None)
relabel_nodes (bool, optional) – If set to True, the resulting edge_index will be relabeled to hold consecutive indices starting from zero. (default: False)
num_nodes (int, optional) – The number of nodes, i.e. max(edge_index) + 1. (default: None)
return_edge_mask (bool, optional) – If set to True, will return the edge mask to filter out additional edge features. (default: False)
(LongTensor, Tensor)
Examples
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6],
...                            [1, 0, 2, 1, 3, 2, 4, 3, 5, 4, 6, 5]])
>>> edge_attr = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
>>> subset = torch.tensor([3, 4, 5])
>>> subgraph(subset, edge_index, edge_attr)
(tensor([[3, 4, 4, 5],
        [4, 3, 5, 4]]),
 tensor([ 7.,  8.,  9., 10.]))
>>> subgraph(subset, edge_index, edge_attr, return_edge_mask=True)
(tensor([[3, 4, 4, 5],
        [4, 3, 5, 4]]),
 tensor([ 7.,  8.,  9., 10.]),
 tensor([False, False, False, False, False, False,  True,
          True,  True,  True, False, False]))
Returns the induced subgraph of the bipartite graph (edge_index, edge_attr) containing the nodes in subset.
subset (Tuple[Tensor, Tensor] or tuple([int],[int])) – The nodes to keep.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – Edge weights or multi-dimensional edge features. (default: None)
relabel_nodes (bool, optional) – If set to True, the resulting edge_index will be relabeled to hold consecutive indices starting from zero. (default: False)
size (tuple, optional) – The number of nodes. (default: None)
return_edge_mask (bool, optional) – If set to True, will return the edge mask to filter out additional edge features. (default: False)
(LongTensor, Tensor)
Examples
>>> edge_index = torch.tensor([[0, 5, 2, 3, 3, 4, 4, 3, 5, 5, 6],
...                            [0, 0, 3, 2, 0, 0, 2, 1, 2, 3, 1]])
>>> edge_attr = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
>>> subset = (torch.tensor([2, 3, 5]), torch.tensor([2, 3]))
>>> bipartite_subgraph(subset, edge_index, edge_attr)
(tensor([[2, 3, 5, 5],
        [3, 2, 2, 3]]),
 tensor([ 3,  4,  9, 10]))
>>> bipartite_subgraph(subset, edge_index, edge_attr,
...                    return_edge_mask=True)
(tensor([[2, 3, 5, 5],
        [3, 2, 2, 3]]),
 tensor([ 3,  4,  9, 10]),
 tensor([False, False,  True,  True, False, False, False,
         False,  True,  True, False]))
Computes the induced subgraph of edge_index around all nodes in node_idx reachable within \(k\) hops.
The flow argument denotes the direction of edges for finding \(k\)-hop neighbors. If set to "source_to_target", then the method will find all neighbors that point to the initial set of seed nodes in node_idx. This mimics the natural flow of message passing in Graph Neural Networks.
The method returns (1) the nodes involved in the subgraph, (2) the filtered edge_index connectivity, (3) the mapping from node indices in node_idx to their new location, and (4) the edge mask indicating which edges were preserved.
node_idx (int, list, tuple or torch.Tensor) – The central seed node(s).
num_hops (int) – The number of hops \(k\).
edge_index (LongTensor) – The edge indices.
relabel_nodes (bool, optional) – If set to True, the resulting edge_index will be relabeled to hold consecutive indices starting from zero. (default: False)
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)
flow (str, optional) – The flow direction of \(k\)-hop aggregation ("source_to_target" or "target_to_source"). (default: "source_to_target")
directed (bool, optional) – If set to True, will only include directed edges to the seed nodes node_idx. (default: False)
(LongTensor, LongTensor, LongTensor, BoolTensor)
Examples
>>> edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
...                            [2, 2, 4, 4, 6, 6]])
>>> # Center node 6, 2-hops
>>> subset, edge_index, mapping, edge_mask = k_hop_subgraph(
...     6, 2, edge_index, relabel_nodes=True)
>>> subset
tensor([2, 3, 4, 5, 6])
>>> edge_index
tensor([[0, 1, 2, 3],
        [2, 2, 4, 4]])
>>> mapping
tensor([4])
>>> edge_mask
tensor([False, False,  True,  True,  True,  True])
>>> subset[mapping]
tensor([6])
>>> edge_index = torch.tensor([[1, 2, 4, 5],
...                            [0, 1, 5, 6]])
>>> (subset, edge_index,
...  mapping, edge_mask) = k_hop_subgraph([0, 6], 2, edge_index,
...                                       relabel_nodes=True)
>>> subset
tensor([0, 1, 2, 4, 5, 6])
>>> edge_index
tensor([[1, 2, 3, 4],
        [0, 1, 4, 5]])
>>> mapping
tensor([0, 5])
>>> edge_mask
tensor([True, True, True, True])
>>> subset[mapping]
tensor([0, 6])
Randomly drops nodes from the adjacency matrix edge_index with probability p using samples from a Bernoulli distribution.
The method returns (1) the retained edge_index, (2) the edge mask indicating which edges were retained, and (3) the node mask indicating which nodes were retained.
edge_index (LongTensor) – The edge indices.
p (float, optional) – Dropout probability. (default: 0.5)
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)
training (bool, optional) – If set to False, this operation is a no-op. (default: True)
relabel_nodes (bool, optional) – If set to True, the resulting edge_index will be relabeled to hold consecutive indices starting from zero. (default: False)
(LongTensor, BoolTensor, BoolTensor)
Examples
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> edge_index, edge_mask, node_mask = dropout_node(edge_index)
>>> edge_index
tensor([[0, 1],
        [1, 0]])
>>> edge_mask
tensor([ True,  True, False, False, False, False])
>>> node_mask
tensor([ True,  True, False, False])
Randomly drops edges from the adjacency matrix edge_index with probability p using samples from a Bernoulli distribution.
The method returns (1) the retained edge_index, and (2) the edge mask or index indicating which edges were retained, depending on the argument force_undirected.
edge_index (LongTensor) – The edge indices.
p (float, optional) – Dropout probability. (default: 0.5)
force_undirected (bool, optional) – If set to True, will either drop or keep both edges of an undirected edge. (default: False)
training (bool, optional) – If set to False, this operation is a no-op. (default: True)
(LongTensor, BoolTensor or LongTensor)
Examples
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> edge_index, edge_mask = dropout_edge(edge_index)
>>> edge_index
tensor([[0, 1, 2, 2],
        [1, 2, 1, 3]])
>>> edge_mask  # masks indicating which edges are retained
tensor([ True, False,  True,  True,  True, False])
>>> edge_index, edge_id = dropout_edge(edge_index,
...                                    force_undirected=True)
>>> edge_index
tensor([[0, 1, 2, 1, 2, 3],
        [1, 2, 3, 0, 1, 2]])
>>> edge_id  # indices indicating which edges are retained
tensor([0, 2, 4, 0, 2, 4])
Drops edges from the adjacency matrix edge_index based on random walks. The source nodes to start random walks from are sampled from edge_index with probability p, following a Bernoulli distribution.
The method returns (1) the retained edge_index, and (2) the edge mask indicating which edges were retained.
edge_index (LongTensor) – The edge indices.
p (float, optional) – Sample probability. (default: 0.2)
walks_per_node (int, optional) – The number of walks per node, same as Node2Vec. (default: 1)
walk_length (int, optional) – The walk length, same as Node2Vec. (default: 3)
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)
is_sorted (bool, optional) – If set to True, will expect edge_index to be already sorted row-wise. (default: False)
training (bool, optional) – If set to False, this operation is a no-op. (default: True)
(LongTensor, BoolTensor)
Example
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> edge_index, edge_mask = dropout_path(edge_index)
>>> edge_index
tensor([[1, 2],
        [2, 3]])
>>> edge_mask  # masks indicating which edges are retained
tensor([False, False,  True, False,  True, False])
Randomly drops edges from the adjacency matrix (edge_index, edge_attr) with probability p using samples from a Bernoulli distribution.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – Edge weights or multi-dimensional edge features. (default: None)
p (float, optional) – Dropout probability. (default: 0.5)
force_undirected (bool, optional) – If set to True, will either drop or keep both edges of an undirected edge. (default: False)
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)
training (bool, optional) – If set to False, this operation is a no-op. (default: True)
Examples
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> edge_attr = torch.tensor([1, 2, 3, 4, 5, 6])
>>> dropout_adj(edge_index, edge_attr)
(tensor([[0, 1, 2, 3],
        [1, 2, 3, 2]]),
 tensor([1, 3, 5, 6]))
>>> # The returned graph is kept undirected
>>> dropout_adj(edge_index, edge_attr, force_undirected=True)
(tensor([[0, 1, 2, 1, 2, 3],
        [1, 2, 3, 0, 1, 2]]),
 tensor([1, 3, 5, 1, 3, 5]))
The homophily of a graph characterizes how likely nodes with the same label are near each other in a graph.
There are many measures of homophily that fit this definition. In particular:
In the “Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs” paper, the homophily is the fraction of edges in a graph which connects nodes that have the same class label:
\[\frac{| \{ (v,w) : (v,w) \in \mathcal{E} \wedge y_v = y_w \} | } {|\mathcal{E}|}\]
That measure is called the edge homophily ratio.
In the “Geom-GCN: Geometric Graph Convolutional Networks” paper, edge homophily is normalized across neighborhoods:
\[\frac{1}{|\mathcal{V}|} \sum_{v \in \mathcal{V}} \frac{ | \{ (w,v) : w \in \mathcal{N}(v) \wedge y_v = y_w \} | } { |\mathcal{N}(v)| }\]
That measure is called the node homophily ratio.
In the “Large-Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods” paper, edge homophily is modified to be insensitive to the number of classes and size of each class:
\[\frac{1}{C-1} \sum_{k=1}^{C} \max \left(0, h_k - \frac{|\mathcal{C}_k|} {|\mathcal{V}|} \right),\]
where \(C\) denotes the number of classes, \(|\mathcal{C}_k|\) denotes the number of nodes of class \(k\), and \(h_k\) denotes the edge homophily ratio of nodes of class \(k\).
Thus, that measure is called the class insensitive edge homophily ratio.
edge_index (Tensor or SparseTensor) – The graph connectivity.
y (Tensor) – The labels.
batch (LongTensor, optional) – Batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each node to a specific example. (default: None)
method (str, optional) – The method used to calculate the homophily, either "edge" (first formula), "node" (second formula) or "edge_insensitive" (third formula). (default: "edge")
Examples
>>> edge_index = torch.tensor([[0, 1, 2, 3],
...                            [1, 2, 0, 4]])
>>> y = torch.tensor([0, 0, 0, 0, 1])
>>> # Edge homophily ratio
>>> homophily(edge_index, y, method='edge')
0.75
>>> # Node homophily ratio
>>> homophily(edge_index, y, method='node')
0.6000000238418579
>>> # Class insensitive edge homophily ratio
>>> homophily(edge_index, y, method='edge_insensitive')
0.19999998807907104
The degree assortativity coefficient from the “Mixing patterns in networks” paper. Assortativity in a network refers to the tendency of nodes to connect with other similar nodes over dissimilar nodes. It is computed from Pearson correlation coefficient of the node degrees.
edge_index (Tensor or SparseTensor) – The graph connectivity.
float – The value of the degree assortativity coefficient for the input graph \(\in [-1, 1]\)
Example
>>> edge_index = torch.tensor([[0, 1, 2, 3, 2],
...                            [1, 2, 0, 1, 3]])
>>> assortativity(edge_index)
-0.666667640209198
Applies normalization to the edges of a graph.
This function can add self-loops to the graph and apply either symmetric or asymmetric normalization based on the node degrees.
edge_index (LongTensor) – The edge indices.
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)
add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)
symmetric (bool, optional) – If set to True, symmetric normalization (\(D^{-1/2} A D^{-1/2}\)) is used, otherwise asymmetric normalization (\(D^{-1} A\)).
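Example (an illustrative sketch, not from the original docstring; it assumes the function returns the normalized (edge_index, edge_weight) pair):
>>> edge_index = torch.tensor([[0, 1], [1, 0]])
>>> edge_index, edge_weight = normalize_edge_index(edge_index,
...                                                symmetric=True)
>>> # With self-loops added, both nodes have degree 2, so every
>>> # symmetrically normalized weight is 1/sqrt(2) * 1/sqrt(2) = 0.5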
Computes the graph Laplacian of the graph given by edge_index and optional edge_weight.
edge_index (LongTensor) – The edge indices.
edge_weight (Tensor, optional) – One-dimensional edge weights. (default: None)
normalization (str, optional) – The normalization scheme for the graph Laplacian (default: None):
1. None: No normalization \(\mathbf{L} = \mathbf{D} - \mathbf{A}\)
2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\)
3. "rw": Random-walk normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1} \mathbf{A}\)
dtype (torch.dtype, optional) – The desired data type of the returned tensor in case edge_weight=None. (default: None)
num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)
Examples
>>> edge_index = torch.tensor([[0, 1, 1, 2],
...                            [1, 0, 2, 1]])
>>> edge_weight = torch.tensor([1., 2., 2., 4.])
>>> # No normalization
>>> lap = get_laplacian(edge_index, edge_weight)
>>> # Symmetric normalization
>>> lap_sym = get_laplacian(edge_index, edge_weight, normalization='sym')
>>> # Random-walk normalization
>>> lap_rw = get_laplacian(edge_index, edge_weight, normalization='rw')
Computes the mesh Laplacian of a mesh given by pos and face.
Computation is based on the cotangent matrix defined as
\[\begin{split}\mathbf{C}_{ij} = \begin{cases} \frac{\cot \angle_{ikj}~+\cot \angle_{ilj}}{2} & \text{if } i, j \text{ is an edge} \\ -\sum_{j \in N(i)}{C_{ij}} & \text{if } i \text{ is in the diagonal} \\ 0 & \text{otherwise} \end{cases}\end{split}\]
Normalization depends on the mass matrix defined as
\[\begin{split}\mathbf{M}_{ij} = \begin{cases} a(i) & \text{if } i \text{ is in the diagonal} \\ 0 & \text{otherwise} \end{cases}\end{split}\]
where \(a(i)\) is obtained by joining the barycenters of the triangles around vertex \(i\).
pos (Tensor) – The node positions.
face (LongTensor) – The face indices.
normalization (str, optional) – The normalization scheme for the mesh Laplacian (default: None):
1. None: No normalization \(\mathbf{L} = \mathbf{C}\)
2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{M}^{-1/2} \mathbf{C}\mathbf{M}^{-1/2}\)
3. "rw": Row-wise normalization \(\mathbf{L} = \mathbf{M}^{-1} \mathbf{C}\)
Returns a new tensor which masks the src tensor along the dimension dim according to the boolean mask mask.
src (torch.Tensor) – The input tensor.
dim (int) – The dimension in which to mask.
mask (torch.BoolTensor) – The 1-D tensor containing the binary mask to index with.
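Example (an illustrative sketch, not from the original docstring):
>>> src = torch.arange(8).view(4, 2)
>>> mask = torch.tensor([True, False, True, False])
>>> mask_select(src, dim=0, mask=mask)
tensor([[0, 1],
        [4, 5]])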
Converts indices to a mask representation.
Example
>>> index = torch.tensor([1, 3, 5])
>>> index_to_mask(index)
tensor([False, True, False, True, False, True])
>>> index_to_mask(index, size=7)
tensor([False, True, False, True, False, True, False])
Converts a mask to an index representation.
mask (Tensor) – The mask.
Example
>>> mask = torch.tensor([False, True, False])
>>> mask_to_index(mask)
tensor([1])
Selects the input tensor or input list according to a given index or mask vector.
src (torch.Tensor or list) – The input tensor or list.
index_or_mask (torch.Tensor) – The index or mask vector.
dim (int) – The dimension along which to select.
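Example (an illustrative sketch, not from the original docstring):
>>> src = torch.arange(6).view(3, 2)
>>> select(src, torch.tensor([0, 2]), dim=0)
tensor([[0, 1],
        [4, 5]])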
Narrows the input tensor or input list to the specified range.
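Example (an illustrative sketch, not from the original docstring; it assumes the signature narrow(src, dim, start, length)):
>>> src = torch.arange(10)
>>> narrow(src, dim=0, start=2, length=4)
tensor([2, 3, 4, 5])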
Given a sparse batch of node features \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times F}\) (with \(N_i\) indicating the number of nodes in graph \(i\)), creates a dense node feature tensor \(\mathbf{X} \in \mathbb{R}^{B \times N_{\max} \times F}\) (with \(N_{\max} = \max_i^B N_i\)). In addition, a mask of shape \(\mathbf{M} \in \{ 0, 1 \}^{B \times N_{\max}}\) is returned, holding information about the existence of fake-nodes in the dense representation.
x (Tensor) – Node feature matrix \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times F}\).
batch (LongTensor, optional) – Batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each node to a specific example. Must be ordered. (default: None)
fill_value (float, optional) – The value for invalid entries in the resulting dense output tensor. (default: 0)
max_num_nodes (int, optional) – The size of the output node dimension. (default: None)
batch_size (int, optional) – The batch size. (default: None)
(Tensor, BoolTensor)
Examples
>>> x = torch.arange(12).view(6, 2)
>>> x
tensor([[ 0,  1],
        [ 2,  3],
        [ 4,  5],
        [ 6,  7],
        [ 8,  9],
        [10, 11]])
>>> out, mask = to_dense_batch(x)
>>> mask
tensor([[True, True, True, True, True, True]])
>>> batch = torch.tensor([0, 0, 1, 2, 2, 2])
>>> out, mask = to_dense_batch(x, batch)
>>> out
tensor([[[ 0,  1],
         [ 2,  3],
         [ 0,  0]],
        [[ 4,  5],
         [ 0,  0],
         [ 0,  0]],
        [[ 6,  7],
         [ 8,  9],
         [10, 11]]])
>>> mask
tensor([[ True,  True, False],
        [ True, False, False],
        [ True,  True,  True]])
>>> out, mask = to_dense_batch(x, batch, max_num_nodes=4)
>>> out
tensor([[[ 0,  1],
         [ 2,  3],
         [ 0,  0],
         [ 0,  0]],
        [[ 4,  5],
         [ 0,  0],
         [ 0,  0],
         [ 0,  0]],
        [[ 6,  7],
         [ 8,  9],
         [10, 11],
         [ 0,  0]]])
>>> mask
tensor([[ True,  True, False, False],
        [ True, False, False, False],
        [ True,  True,  True, False]])
Converts batched sparse adjacency matrices given by edge indices and edge attributes to a single dense batched adjacency matrix.
edge_index (LongTensor) – The edge indices.
batch (LongTensor, optional) – Batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each node to a specific example. (default: None)
edge_attr (Tensor, optional) – Edge weights or multi-dimensional edge features. If edge_index contains duplicated edges, the dense adjacency matrix output holds the summed up entries of edge_attr for duplicated edges. (default: None)
max_num_nodes (int, optional) – The size of the output node dimension. (default: None)
batch_size (int, optional) – The batch size. (default: None)
Tensor
Examples
>>> edge_index = torch.tensor([[0, 0, 1, 2, 3],
...                            [0, 1, 0, 3, 0]])
>>> batch = torch.tensor([0, 0, 1, 1])
>>> to_dense_adj(edge_index, batch)
tensor([[[1., 1.],
         [1., 0.]],
        [[0., 1.],
         [1., 0.]]])
>>> to_dense_adj(edge_index, batch, max_num_nodes=4)
tensor([[[1., 1., 0., 0.],
         [1., 0., 0., 0.],
         [0., 0., 0., 0.],
         [0., 0., 0., 0.]],
        [[0., 1., 0., 0.],
         [1., 0., 0., 0.],
         [0., 0., 0., 0.],
         [0., 0., 0., 0.]]])
>>> edge_attr = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
>>> to_dense_adj(edge_index, batch, edge_attr)
tensor([[[1., 2.],
         [3., 0.]],
        [[0., 4.],
         [5., 0.]]])
Given a contiguous batch of tensors \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times *}\) (with \(N_i\) indicating the number of elements in example \(i\)), creates a nested PyTorch tensor. Reverse operation of from_nested_tensor().
x (torch.Tensor) – The input tensor \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times *}\).
batch (torch.Tensor, optional) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. Must be ordered. (default: None)
ptr (torch.Tensor, optional) – Alternative representation of batch in compressed format. (default: None)
batch_size (int, optional) – The batch size \(B\). (default: None)
Given a nested PyTorch tensor, creates a contiguous batch of tensors \(\mathbf{X} \in \mathbb{R}^{(N_1 + \ldots + N_B) \times *}\), and optionally a batch vector which assigns each element to a specific example. Reverse operation of to_nested_tensor().
x (torch.Tensor) – The nested input tensor. The sizes of the nested tensors need to match except in the first dimension.
return_batch (bool, optional) – If set to True, will also return the batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\). (default: False)
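Example (an illustrative round-trip sketch, not from the original docstring, showing that from_nested_tensor() inverts to_nested_tensor()):
>>> x = torch.arange(6).view(3, 2)
>>> batch = torch.tensor([0, 0, 1])
>>> nested = to_nested_tensor(x, batch)
>>> out, out_batch = from_nested_tensor(nested, return_batch=True)
>>> torch.equal(x, out)
True
>>> out_batch
tensor([0, 0, 1])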
Converts a dense adjacency matrix to a sparse adjacency matrix defined by edge indices and edge attributes.
adj (torch.Tensor) – The dense adjacency matrix of shape [num_nodes, num_nodes] or [batch_size, num_nodes, num_nodes].
mask (torch.Tensor, optional) – A boolean tensor of shape [batch_size, num_nodes] holding information about which nodes in each example are valid. (default: None)
(LongTensor, Tensor)
Examples
>>> # For a single adjacency matrix:
>>> adj = torch.tensor([[3, 1],
...                     [2, 0]])
>>> dense_to_sparse(adj)
(tensor([[0, 0, 1],
        [0, 1, 0]]),
 tensor([3, 1, 2]))
>>> # For two adjacency matrices:
>>> adj = torch.tensor([[[3, 1],
...                      [2, 0]],
...                     [[0, 1],
...                      [0, 2]]])
>>> dense_to_sparse(adj)
(tensor([[0, 0, 1, 2, 3],
        [0, 1, 0, 3, 3]]),
 tensor([3, 1, 2, 1, 2]))
>>> # First graph with two nodes, second with three:
>>> adj = torch.tensor([[
...     [3, 1, 0],
...     [2, 0, 0],
...     [0, 0, 0]
... ], [
...     [0, 1, 0],
...     [0, 2, 3],
...     [0, 5, 0]
... ]])
>>> mask = torch.tensor([
...     [True, True, False],
...     [True, True, True]
... ])
>>> dense_to_sparse(adj, mask)
(tensor([[0, 0, 1, 2, 3, 3, 4],
        [0, 1, 0, 3, 3, 4, 3]]),
 tensor([3, 1, 2, 1, 2, 3, 5]))
Returns True if the input src is a torch.sparse.Tensor (in any sparse layout).
src (Any) – The input object to be checked.
Returns True if the input src is of type torch.sparse.Tensor (in any sparse layout) or of type torch_sparse.SparseTensor.
src (Any) – The input object to be checked.
Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_coo. See to_edge_index() for the reverse operation.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – The edge attributes. (default: None)
size (int or (int, int), optional) – The size of the sparse matrix. If given as an integer, will create a quadratic sparse matrix. If set to None, will infer a quadratic sparse matrix based on edge_index.max() + 1. (default: None)
is_coalesced (bool) – If set to True, will assume that edge_index is already coalesced and thus avoids expensive computation. (default: False)
torch.sparse.Tensor
Example
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> to_torch_coo_tensor(edge_index)
tensor(indices=tensor([[0, 1, 1, 2, 2, 3],
                       [1, 0, 2, 1, 3, 2]]),
       values=tensor([1., 1., 1., 1., 1., 1.]),
       size=(4, 4), nnz=6, layout=torch.sparse_coo)
Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_csr. See to_edge_index() for the reverse operation.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – The edge attributes. (default: None)
size (int or (int, int), optional) – The size of the sparse matrix. If given as an integer, will create a quadratic sparse matrix. If set to None, will infer a quadratic sparse matrix based on edge_index.max() + 1. (default: None)
is_coalesced (bool) – If set to True, will assume that edge_index is already coalesced and thus avoids expensive computation. (default: False)
torch.sparse.Tensor
Example
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> to_torch_csr_tensor(edge_index)
tensor(crow_indices=tensor([0, 1, 3, 5, 6]),
       col_indices=tensor([1, 0, 2, 1, 3, 2]),
       values=tensor([1., 1., 1., 1., 1., 1.]),
       size=(4, 4), nnz=6, layout=torch.sparse_csr)
Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with layout torch.sparse_csc. See to_edge_index() for the reverse operation.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – The edge attributes. (default: None)
size (int or (int, int), optional) – The size of the sparse matrix. If given as an integer, will create a quadratic sparse matrix. If set to None, will infer a quadratic sparse matrix based on edge_index.max() + 1. (default: None)
is_coalesced (bool) – If set to True, will assume that edge_index is already coalesced and thus avoids expensive computation. (default: False)
torch.sparse.Tensor
Example
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> to_torch_csc_tensor(edge_index)
tensor(ccol_indices=tensor([0, 1, 3, 5, 6]),
       row_indices=tensor([1, 0, 2, 1, 3, 2]),
       values=tensor([1., 1., 1., 1., 1., 1.]),
       size=(4, 4), nnz=6, layout=torch.sparse_csc)
Converts a sparse adjacency matrix defined by edge indices and edge attributes to a torch.sparse.Tensor with custom layout. See to_edge_index() for the reverse operation.
edge_index (LongTensor) – The edge indices.
edge_attr (Tensor, optional) – The edge attributes. (default: None)
size (int or (int, int), optional) – The size of the sparse matrix. If given as an integer, will create a quadratic sparse matrix. If set to None, will infer a quadratic sparse matrix based on edge_index.max() + 1. (default: None)
is_coalesced (bool) – If set to True, will assume that edge_index is already coalesced and thus avoids expensive computation. (default: False)
layout (torch.layout, optional) – The layout of the output sparse tensor (torch.sparse_coo, torch.sparse_csr, torch.sparse_csc). (default: torch.sparse_coo)
torch.sparse.Tensor
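Example (an illustrative sketch, not from the original docstring):
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> adj = to_torch_sparse_tensor(edge_index, layout=torch.sparse_csr)
>>> adj.layout
torch.sparse_csr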
Converts a torch.sparse.Tensor or a torch_sparse.SparseTensor to edge indices and edge attributes.
adj (torch.sparse.Tensor or SparseTensor) – The adjacency matrix.
Example
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> adj = to_torch_coo_tensor(edge_index)
>>> to_edge_index(adj)
(tensor([[0, 1, 1, 2, 2, 3],
        [1, 0, 2, 1, 3, 2]]),
 tensor([1., 1., 1., 1., 1., 1.]))
Matrix product of sparse matrix with dense matrix.
src (torch.Tensor or torch_sparse.SparseTensor or EdgeIndex) – The input sparse matrix, which can be a torch_sparse.SparseTensor, a PyTorch torch.sparse.Tensor, or a PyG EdgeIndex.
other (torch.Tensor) – The input dense matrix.
reduce (str, optional) – The reduce operation to use ("sum", "mean", "min", "max"). (default: "sum")
Tensor
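Example (an illustrative sketch, not from the original docstring):
>>> edge_index = torch.tensor([[0, 1, 1],
...                            [1, 0, 1]])
>>> adj = to_torch_csr_tensor(edge_index)  # dense form: [[0., 1.], [1., 1.]]
>>> x = torch.tensor([[1.], [2.]])
>>> spmm(adj, x, reduce='sum')
tensor([[2.],
        [3.]])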
Splits src according to a batch vector along dimension dim.
src (Tensor) – The source tensor.
batch (LongTensor) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each entry in src to a specific example. Must be ordered.
dim (int, optional) – The dimension along which to split the src tensor. (default: 0)
batch_size (int, optional) – The batch size. (default: None)
List[Tensor]
Example
>>> src = torch.arange(7)
>>> batch = torch.tensor([0, 0, 0, 1, 1, 2, 2])
>>> unbatch(src, batch)
(tensor([0, 1, 2]), tensor([3, 4]), tensor([5, 6]))
Splits the edge_index according to a batch vector.
List[Tensor]
Example
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 4, 5, 5, 6],
...                            [1, 0, 2, 1, 3, 2, 5, 4, 6, 5]])
>>> batch = torch.tensor([0, 0, 0, 0, 1, 1, 1])
>>> unbatch_edge_index(edge_index, batch)
(tensor([[0, 1, 1, 2, 2, 3],
        [1, 0, 2, 1, 3, 2]]),
 tensor([[0, 1, 1, 2],
        [1, 0, 2, 1]]))
Takes a one-dimensional index tensor and returns a one-hot encoded representation of it with shape [*, num_classes] that has zeros everywhere except where the index of the last dimension matches the corresponding value of the input tensor, in which case it will be 1.
Note
This is a more memory-efficient version of torch.nn.functional.one_hot() as you can customize the output dtype.
index (torch.Tensor) – The one-dimensional input tensor.
num_classes (int, optional) – The total number of classes. If set to None, the number of classes will be inferred as one greater than the largest class value in the input tensor. (default: None)
dtype (torch.dtype, optional) – The dtype of the output tensor.
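Example (an illustrative sketch, not from the original docstring; by default the output is a floating-point tensor):
>>> index = torch.tensor([0, 2, 1])
>>> one_hot(index, num_classes=3)
tensor([[1., 0., 0.],
        [0., 0., 1.],
        [0., 1., 0.]])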
Computes the normalized cut \(\mathbf{e}_{i,j} \cdot \left( \frac{1}{\deg(i)} + \frac{1}{\deg(j)} \right)\) of a weighted graph given by edge indices and edge attributes.
Tensor
Example
>>> edge_index = torch.tensor([[1, 1, 2, 3],
...                            [3, 3, 1, 2]])
>>> edge_attr = torch.tensor([1., 1., 1., 1.])
>>> normalized_cut(edge_index, edge_attr)
tensor([1.5000, 1.5000, 2.0000, 1.5000])
Returns the edge indices of a two-dimensional grid graph with height height and width width and its node positions.
height (int) – The height of the grid.
width (int) – The width of the grid.
dtype (torch.dtype, optional) – The desired data type of the returned position tensor. (default: None)
device (torch.device, optional) – The desired device of the returned tensors. (default: None)
(LongTensor, Tensor)
Example
>>> (row, col), pos = grid(height=2, width=2)
>>> row
tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3])
>>> col
tensor([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3])
>>> pos
tensor([[0., 1.],
        [1., 1.],
        [0., 0.],
        [1., 0.]])
Computes (normalized) geodesic distances of a mesh given by pos and face. If src and dst are given, this method only computes the geodesic distances for the respective source and target node-pairs.
Note
This function requires the gdist package. To install, run pip install cython && pip install gdist.
pos (torch.Tensor) – The node positions.
face (torch.Tensor) – The face indices.
src (torch.Tensor, optional) – If given, only compute geodesic distances for the specified source indices. (default: None)
dst (torch.Tensor, optional) – If given, only compute geodesic distances for the specified target indices. (default: None)
norm (bool, optional) – Normalizes geodesic distances by \(\sqrt{\textrm{area}(\mathcal{M})}\). (default: True)
max_distance (float, optional) – If given, only yields results for geodesic distances less than max_distance. This will speed up runtime dramatically. (default: None)
num_workers (int, optional) – How many subprocesses to use for calculating geodesic distances. 0 means that computation takes place in the main process. -1 means that the available amount of CPU cores is used. (default: 0)
Tensor
Example
>>> pos = torch.tensor([[0.0, 0.0, 0.0],
...                     [2.0, 0.0, 0.0],
...                     [0.0, 2.0, 0.0],
...                     [2.0, 2.0, 0.0]])
>>> face = torch.tensor([[0, 0],
...                      [1, 2],
...                      [3, 3]])
>>> geodesic_distance(pos, face)
[[0, 1, 1, 1.4142135623730951],
 [1, 0, 1.4142135623730951, 1],
 [1, 1.4142135623730951, 0, 1],
 [1.4142135623730951, 1, 1, 0]]
Converts a graph given by edge indices and edge attributes to a scipy sparse matrix.
Examples
>>> edge_index = torch.tensor([
...     [0, 1, 1, 2, 2, 3],
...     [1, 0, 2, 1, 3, 2],
... ])
>>> to_scipy_sparse_matrix(edge_index)
<4x4 sparse matrix of type '<class 'numpy.float32'>'
    with 6 stored elements in COOrdinate format>
Converts a scipy sparse matrix to edge indices and edge attributes.
A (scipy.sparse) – A sparse matrix.
Examples
>>> edge_index = torch.tensor([
...     [0, 1, 1, 2, 2, 3],
...     [1, 0, 2, 1, 3, 2],
... ])
>>> adj = to_scipy_sparse_matrix(edge_index)
>>> # `edge_index` and `edge_weight` are both returned
>>> from_scipy_sparse_matrix(adj)
(tensor([[0, 1, 1, 2, 2, 3],
        [1, 0, 2, 1, 3, 2]]),
 tensor([1., 1., 1., 1., 1., 1.]))
Converts a torch_geometric.data.Data instance to a networkx.Graph if to_undirected is set to True, or a directed networkx.DiGraph otherwise.
data (torch_geometric.data.Data or torch_geometric.data.HeteroData) – A homogeneous or heterogeneous data object.
node_attrs (iterable of str, optional) – The node attributes to be copied. (default: None)
edge_attrs (iterable of str, optional) – The edge attributes to be copied. (default: None)
graph_attrs (iterable of str, optional) – The graph attributes to be copied. (default: None)
to_undirected (bool or str, optional) – If set to True, will return a networkx.Graph instead of a networkx.DiGraph. By default, will include all edges and make them undirected. If set to "upper", the undirected graph will only correspond to the upper triangle of the input adjacency matrix. If set to "lower", the undirected graph will only correspond to the lower triangle of the input adjacency matrix. Only applicable in case the data object holds a homogeneous graph. (default: False)
to_multi (bool, optional) – If set to True, will return a networkx.MultiGraph or a networkx.MultiDiGraph (depending on the to_undirected option), which will not drop duplicated edges that may exist in data. (default: False)
remove_self_loops (bool, optional) – If set to True, will not include self-loops in the resulting graph. (default: False)
Examples
>>> edge_index = torch.tensor([
...     [0, 1, 1, 2, 2, 3],
...     [1, 0, 2, 1, 3, 2],
... ])
>>> data = Data(edge_index=edge_index, num_nodes=4)
>>> to_networkx(data)
<networkx.classes.digraph.DiGraph at 0x2713fdb40d0>
Converts a networkx.Graph or networkx.DiGraph to a torch_geometric.data.Data instance.
G (networkx.Graph or networkx.DiGraph) – A networkx graph.
group_node_attrs (List[str] or "all", optional) – The node attributes to be concatenated and added to data.x. (default: None)
group_edge_attrs (List[str] or "all", optional) – The edge attributes to be concatenated and added to data.edge_attr. (default: None)
Note
All group_node_attrs and group_edge_attrs values must be numeric.
Examples
>>> edge_index = torch.tensor([
...     [0, 1, 1, 2, 2, 3],
...     [1, 0, 2, 1, 3, 2],
... ])
>>> data = Data(edge_index=edge_index, num_nodes=4)
>>> g = to_networkx(data)
>>> # A `Data` object is returned
>>> from_networkx(g)
Data(edge_index=[2, 6], num_nodes=4)
Converts a (edge_index, edge_weight) tuple to a networkit.Graph.
edge_index (torch.Tensor) – The edge indices of the graph.
edge_weight (torch.Tensor, optional) – The edge weights of the graph. (default: None)
num_nodes (int, optional) – The number of nodes in the graph. (default: None)
directed (bool, optional) – If set to False, the graph will be undirected. (default: True)
Converts a networkit.Graph to a (edge_index, edge_weight) tuple. If the networkit.Graph is not weighted, the returned edge_weight will be None.
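Example
A minimal round-trip sketch under the same assumptions as above (to_networkit/from_networkit names, networkit installed); the last line checks the documented behavior that an unweighted graph yields edge_weight = None:
>>> g = to_networkit(edge_index, num_nodes=3)
>>> edge_index, edge_weight = from_networkit(g)
>>> edge_weight is None  # input graph was unweighted
True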
Converts a torch_geometric.data.Data instance to a trimesh.Trimesh.
data (torch_geometric.data.Data) – The data object.
Example
>>> pos = torch.tensor([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]],
...                    dtype=torch.float)
>>> face = torch.tensor([[0, 1, 2], [1, 2, 3]]).t()
>>> data = Data(pos=pos, face=face)
>>> to_trimesh(data)
<trimesh.Trimesh(vertices.shape=(4, 3), faces.shape=(2, 3))>
Converts a trimesh.Trimesh to a torch_geometric.data.Data instance.
mesh (trimesh.Trimesh) – A trimesh mesh.
Example
>>> pos = torch.tensor([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]],
...                    dtype=torch.float)
>>> face = torch.tensor([[0, 1, 2], [1, 2, 3]]).t()
>>> data = Data(pos=pos, face=face)
>>> mesh = to_trimesh(data)
>>> from_trimesh(mesh)
Data(pos=[4, 3], face=[3, 2])
Converts a graph given by edge_index and optional edge_weight into a cugraph graph object.
edge_index (torch.Tensor) – The edge indices of the graph.
edge_weight (torch.Tensor, optional) – The edge weights of the graph. (default: None)
relabel_nodes (bool, optional) – If set to True, cugraph will remove any isolated nodes, leading to a relabeling of nodes. (default: True)
directed (bool, optional) – If set to False, the graph will be undirected. (default: True)
Converts a cugraph graph object into edge_index and optional edge_weight tensors.
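Example
No example is given for the cugraph converters; a minimal round-trip sketch, assuming they are exposed as to_cugraph and from_cugraph and that a CUDA-enabled cugraph installation is available:
>>> edge_index = torch.tensor([[0, 1, 1, 2],
...                            [1, 0, 2, 1]], device='cuda')
>>> g = to_cugraph(edge_index, relabel_nodes=False)
>>> edge_index, edge_weight = from_cugraph(g)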
Converts a torch_geometric.data.Data or torch_geometric.data.HeteroData instance to a dgl graph object.
data (torch_geometric.data.Data or torch_geometric.data.HeteroData) – The data object.
Example
>>> edge_index = torch.tensor([[0, 1, 1, 2, 3, 0],
...                            [1, 0, 2, 1, 4, 4]])
>>> x = torch.randn(5, 3)
>>> edge_attr = torch.randn(6, 2)
>>> data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
>>> g = to_dgl(data)
>>> g
Graph(num_nodes=5, num_edges=6,
      ndata_schemes={'x': Scheme(shape=(3,))},
      edata_schemes={'edge_attr': Scheme(shape=(2,))})
>>> data = HeteroData()
>>> data['paper'].x = torch.randn(5, 3)
>>> data['author'].x = torch.ones(5, 3)
>>> edge_index = torch.tensor([[0, 1, 2, 3, 4],
...                            [0, 1, 2, 3, 4]])
>>> data['author', 'cites', 'paper'].edge_index = edge_index
>>> g = to_dgl(data)
>>> g
Graph(num_nodes={'author': 5, 'paper': 5},
      num_edges={('author', 'cites', 'paper'): 5},
      metagraph=[('author', 'paper', 'cites')])
Converts a dgl graph object to a torch_geometric.data.Data or torch_geometric.data.HeteroData instance.
g (dgl.DGLGraph) – The dgl graph object.
Example
>>> g = dgl.graph(([0, 0, 1, 5], [1, 2, 2, 0]))
>>> g.ndata['x'] = torch.randn(g.num_nodes(), 3)
>>> g.edata['edge_attr'] = torch.randn(g.num_edges(), 2)
>>> data = from_dgl(g)
>>> data
Data(x=[6, 3], edge_attr=[4, 2], edge_index=[2, 4])
>>> g = dgl.heterograph({
...     ('author', 'writes', 'paper'): ([0, 1, 1, 2, 3, 3, 4],
...                                     [0, 0, 1, 1, 1, 2, 2])})
>>> g.nodes['author'].data['x'] = torch.randn(5, 3)
>>> g.nodes['paper'].data['x'] = torch.randn(3, 3)
>>> data = from_dgl(g)
>>> data
HeteroData(
  author={ x=[5, 3] },
  paper={ x=[3, 3] },
  (author, writes, paper)={ edge_index=[2, 7] }
)
Converts a rdkit.Chem.Mol instance to a torch_geometric.data.Data instance.
mol (rdkit.Chem.Mol) – The rdkit molecule.
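Example
A minimal sketch, assuming this converter is exposed as torch_geometric.utils.from_rdmol and that rdkit is installed:
>>> from rdkit import Chem
>>> mol = Chem.MolFromSmiles('CCO')  # ethanol: 3 heavy atoms, 2 bonds
>>> data = from_rdmol(mol)  # yields a `Data` object with atom and bond features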
Converts a torch_geometric.data.Data instance to a rdkit.Chem.Mol instance.
data (torch_geometric.data.Data) – The molecular graph data.
kekulize (bool, optional) – If set to True, converts aromatic bonds to single/double bonds. (default: False)
Converts a SMILES string to a torch_geometric.data.Data instance.
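Example
A minimal sketch, assuming the converter is exposed as torch_geometric.utils.from_smiles:
>>> data = from_smiles('CCO')  # ethanol
>>> data.num_nodes, data.num_edges  # 3 heavy atoms, 2 bonds stored in both directions
(3, 4)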
Converts a torch_geometric.data.Data instance to a SMILES string.
data (torch_geometric.data.Data) – The molecular graph.
kekulize (bool, optional) – If set to True, converts aromatic bonds to single/double bonds. (default: False)
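Example
A minimal round-trip sketch, assuming the converse helper is exposed as to_smiles; the exact output string depends on rdkit's canonicalization:
>>> data = from_smiles('CCO')
>>> to_smiles(data)
'CCO'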
Returns the edge_index of a random Erdos-Renyi graph.
Examples
>>> erdos_renyi_graph(5, 0.2, directed=False)
tensor([[0, 1, 1, 4],
        [1, 0, 4, 1]])
>>> erdos_renyi_graph(5, 0.2, directed=True)
tensor([[0, 1, 3, 3, 4, 4],
        [4, 3, 1, 2, 1, 3]])
Returns the edge_index of a stochastic blockmodel graph.
Examples
>>> block_sizes = [2, 2, 4]
>>> edge_probs = [[0.25, 0.05, 0.02],
...               [0.05, 0.35, 0.07],
...               [0.02, 0.07, 0.40]]
>>> stochastic_blockmodel_graph(block_sizes, edge_probs,
...                             directed=False)
tensor([[2, 4, 4, 5, 5, 6, 7, 7],
        [5, 6, 7, 2, 7, 4, 4, 5]])
>>> stochastic_blockmodel_graph(block_sizes, edge_probs,
...                             directed=True)
tensor([[0, 2, 3, 4, 4, 5, 5],
        [3, 4, 1, 5, 6, 6, 7]])
Returns the edge_index of a Barabasi-Albert preferential attachment model, where a graph of num_nodes nodes grows by attaching new nodes with num_edges edges that are preferentially attached to existing nodes with high degree.
Example
>>> barabasi_albert_graph(num_nodes=4, num_edges=3)
tensor([[0, 0, 0, 1, 1, 2, 2, 3],
        [1, 2, 3, 0, 2, 0, 1, 0]])
Samples random negative edges of a graph given by edge_index.
edge_index (LongTensor) – The edge indices.
num_nodes (int or Tuple[int, int], optional) – The number of nodes, i.e. max_val + 1 of edge_index. If given as a tuple, then edge_index is interpreted as a bipartite graph with shape (num_src_nodes, num_dst_nodes). (default: None)
num_neg_samples (int or float, optional) – The (approximate) number of negative samples to return. If set to a floating-point value, it represents the ratio of negative samples to generate based on the number of positive edges. If set to None, will try to return a negative edge for every positive edge. (default: None)
method (str, optional) – The method to use for negative sampling, i.e. "sparse" or "dense". This is a memory/runtime trade-off. "sparse" will work on any graph of any size, while "dense" can perform faster true-negative checks. (default: "sparse")
force_undirected (bool, optional) – If set to True, sampled negative edges will be undirected. (default: False)
LongTensor
Examples
>>> # Standard usage
>>> edge_index = torch.as_tensor([[0, 0, 1, 2],
...                               [0, 1, 2, 3]])
>>> negative_sampling(edge_index)
tensor([[3, 0, 0, 3],
        [2, 3, 2, 1]])
>>> negative_sampling(edge_index, num_nodes=(3, 4),
...                   num_neg_samples=0.5)  # 50% of positive edges
tensor([[0, 3],
        [3, 0]])
>>> # For bipartite graph
>>> negative_sampling(edge_index, num_nodes=(3, 4))
tensor([[0, 2, 2, 1],
        [2, 2, 1, 3]])
Samples random negative edges of multiple graphs given by edge_index and batch.
edge_index (LongTensor) – The edge indices.
batch (LongTensor or Tuple[LongTensor, LongTensor]) – Batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each node to a specific example. If given as a tuple, then edge_index is interpreted as a bipartite graph connecting two different node types.
num_neg_samples (int or float, optional) – The number of negative samples to return. If set to None, will try to return a negative edge for every positive edge. If float, it will generate num_neg_samples * num_edges negative samples. (default: None)
method (str, optional) – The method to use for negative sampling, i.e. "sparse" or "dense". This is a memory/runtime trade-off. "sparse" will work on any graph of any size, while "dense" can perform faster true-negative checks. (default: "sparse")
force_undirected (bool, optional) – If set to True, sampled negative edges will be undirected. (default: False)
LongTensor
Examples
>>> # Standard usage
>>> edge_index = torch.as_tensor([[0, 0, 1, 2], [0, 1, 2, 3]])
>>> edge_index = torch.cat([edge_index, edge_index + 4], dim=1)
>>> edge_index
tensor([[0, 0, 1, 2, 4, 4, 5, 6],
        [0, 1, 2, 3, 4, 5, 6, 7]])
>>> batch = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
>>> batched_negative_sampling(edge_index, batch)
tensor([[3, 1, 3, 2, 7, 7, 6, 5],
        [2, 0, 1, 1, 5, 6, 4, 4]])
>>> # Using float multiplier for negative samples
>>> batched_negative_sampling(edge_index, batch, num_neg_samples=1.5)
tensor([[3, 1, 3, 2, 7, 7, 6, 5, 2, 0, 1, 1],
        [2, 0, 1, 1, 5, 6, 4, 4, 3, 2, 3, 0]])
>>> # For bipartite graph
>>> edge_index1 = torch.as_tensor([[0, 0, 1, 1], [0, 1, 2, 3]])
>>> edge_index2 = edge_index1 + torch.tensor([[2], [4]])
>>> edge_index3 = edge_index2 + torch.tensor([[2], [4]])
>>> edge_index = torch.cat([edge_index1, edge_index2,
...                         edge_index3], dim=1)
>>> edge_index
tensor([[ 0,  0,  1,  1,  2,  2,  3,  3,  4,  4,  5,  5],
        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11]])
>>> src_batch = torch.tensor([0, 0, 1, 1, 2, 2])
>>> dst_batch = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])
>>> batched_negative_sampling(edge_index,
...                           (src_batch, dst_batch))
tensor([[ 0,  0,  1,  1,  2,  2,  3,  3,  4,  4,  5,  5],
        [ 2,  3,  0,  1,  6,  7,  4,  5, 10, 11,  8,  9]])
Samples a negative edge (i,k) for every positive edge (i,j) in the graph given by edge_index, and returns it as a tuple of the form (i,j,k).
(LongTensor, LongTensor, LongTensor)
Example
>>> edge_index = torch.as_tensor([[0, 0, 1, 2],
...                               [0, 1, 2, 3]])
>>> structured_negative_sampling(edge_index)
(tensor([0, 0, 1, 2]), tensor([0, 1, 2, 3]), tensor([2, 3, 0, 2]))
Returns True if structured_negative_sampling() is feasible on the graph given by edge_index. structured_negative_sampling() is infeasible if at least one node is connected to all other nodes.
Examples
>>> edge_index = torch.LongTensor([[0, 0, 1, 1, 2, 2, 2],
...                                [1, 2, 0, 2, 0, 1, 1]])
>>> structured_negative_sampling_feasible(edge_index, 3, False)
False
>>> structured_negative_sampling_feasible(edge_index, 3, True)
True
Randomly shuffles the feature matrix x along the first dimension.
The method returns (1) the shuffled x and (2) the permutation indicating the order of the original nodes after shuffling.
(FloatTensor, LongTensor)
Example
>>> # Standard case
>>> x = torch.tensor([[0, 1, 2],
...                   [3, 4, 5],
...                   [6, 7, 8],
...                   [9, 10, 11]], dtype=torch.float)
>>> x, node_perm = shuffle_node(x)
>>> x
tensor([[ 3.,  4.,  5.],
        [ 9., 10., 11.],
        [ 0.,  1.,  2.],
        [ 6.,  7.,  8.]])
>>> node_perm
tensor([1, 3, 0, 2])
>>> # For batched graphs as inputs
>>> batch = torch.tensor([0, 0, 1, 1])
>>> x, node_perm = shuffle_node(x, batch)
>>> x
tensor([[ 3.,  4.,  5.],
        [ 0.,  1.,  2.],
        [ 9., 10., 11.],
        [ 6.,  7.,  8.]])
>>> node_perm
tensor([1, 0, 3, 2])
Randomly masks features from the feature matrix x with probability p using samples from a Bernoulli distribution.
The method returns (1) the retained x and (2) the feature mask broadcastable with x (mode='row' and mode='col') or with the same shape as x (mode='all'), indicating where features are retained.
x (FloatTensor) – The feature matrix.
p (float, optional) – The masking ratio. (default: 0.5)
mode (str, optional) – The masking scheme to use for feature masking ("row", "col" or "all"). If mode='col', will mask entire features of all nodes from the feature matrix. If mode='row', will mask entire nodes from the feature matrix. If mode='all', will mask individual features across all nodes. (default: 'col')
fill_value (float, optional) – The value for masked features in the output tensor. (default: 0)
training (bool, optional) – If set to False, this operation is a no-op. (default: True)
(FloatTensor, BoolTensor)
Examples
>>> # Masked features are column-wise sampled
>>> x = torch.tensor([[1, 2, 3],
...                   [4, 5, 6],
...                   [7, 8, 9]], dtype=torch.float)
>>> x, feat_mask = mask_feature(x)
>>> x
tensor([[1., 0., 3.],
        [4., 0., 6.],
        [7., 0., 9.]])
>>> feat_mask
tensor([[True, False, True]])
>>> # Masked features are row-wise sampled
>>> x, feat_mask = mask_feature(x, mode='row')
>>> x
tensor([[1., 2., 3.],
        [0., 0., 0.],
        [7., 8., 9.]])
>>> feat_mask
tensor([[True], [False], [True]])
>>> # Masked features are uniformly sampled
>>> x, feat_mask = mask_feature(x, mode='all')
>>> x
tensor([[0., 0., 0.],
        [4., 0., 6.],
        [0., 0., 9.]])
>>> feat_mask
tensor([[False, False, False],
        [True, False, True],
        [False, False, True]])
Randomly adds edges to edge_index.
The method returns (1) the retained edge_index and (2) the added edge indices.
edge_index (LongTensor) – The edge indices.
p (float) – Ratio of added edges to the existing edges. (default: 0.5)
force_undirected (bool, optional) – If set to True, added edges will be undirected. (default: False)
num_nodes (int, Tuple[int], optional) – The overall number of nodes, i.e. max_val + 1, or the number of source and destination nodes, i.e. (max_src_val + 1, max_dst_val + 1) of edge_index. (default: None)
training (bool, optional) – If set to False, this operation is a no-op. (default: True)
(LongTensor, LongTensor)
Examples
>>> # Standard case
>>> edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
...                            [1, 0, 2, 1, 3, 2]])
>>> edge_index, added_edges = add_random_edge(edge_index, p=0.5)
>>> edge_index
tensor([[0, 1, 1, 2, 2, 3, 2, 1, 3],
        [1, 0, 2, 1, 3, 2, 0, 2, 1]])
>>> added_edges
tensor([[2, 1, 3],
        [0, 2, 1]])
>>> # The returned graph is kept undirected
>>> edge_index, added_edges = add_random_edge(edge_index, p=0.5,
...                                           force_undirected=True)
>>> edge_index
tensor([[0, 1, 1, 2, 2, 3, 2, 1, 3, 0, 2, 1],
        [1, 0, 2, 1, 3, 2, 0, 2, 1, 2, 1, 3]])
>>> added_edges
tensor([[2, 1, 3, 0, 2, 1],
        [0, 2, 1, 2, 1, 3]])
>>> # For bipartite graphs
>>> edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
...                            [2, 3, 1, 4, 2, 1]])
>>> edge_index, added_edges = add_random_edge(edge_index, p=0.5,
...                                           num_nodes=(6, 5))
>>> edge_index
tensor([[0, 1, 2, 3, 4, 5, 3, 4, 1],
        [2, 3, 1, 4, 2, 1, 1, 3, 2]])
>>> added_edges
tensor([[3, 4, 1],
        [1, 3, 2]])
The tree decomposition algorithm of molecules from the “Junction Tree Variational Autoencoder for Molecular Graph Generation” paper. Returns the graph connectivity of the junction tree, the assignment mapping of each atom to the clique in the junction tree, and the number of cliques.
(LongTensor, LongTensor, int) if return_vocab is False, else (LongTensor, LongTensor, int, LongTensor)
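Example
No example is shown here; a minimal sketch, assuming the function is exposed as torch_geometric.utils.tree_decomposition and operates on an rdkit molecule (the unpacked names are illustrative):
>>> from rdkit import Chem
>>> mol = Chem.MolFromSmiles('C1CCC1C')  # a four-membered ring plus one substituent
>>> edge_index, atom2clique, num_cliques = tree_decomposition(mol)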
Returns the output embeddings of all MessagePassing layers in model.
Internally, this method registers forward hooks on all MessagePassing layers of a model, and runs the forward pass of the model by calling model(*args, **kwargs).
model (torch.nn.Module) – The message passing model.
*args – Arguments passed to the model.
**kwargs (optional) – Additional keyword arguments passed to the model.
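Example
A minimal sketch, assuming the helper is exposed as torch_geometric.utils.get_embeddings and a two-layer GCN from torch_geometric.nn (the inputs x and edge_index are placeholders):
>>> model = GCN(in_channels=16, hidden_channels=32, num_layers=2)
>>> embeddings = get_embeddings(model, x, edge_index)
>>> len(embeddings)  # one embedding tensor per MessagePassing layer
2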
Returns the output embeddings of all MessagePassing layers in a heterogeneous model, organized by node type.
Internally, this method registers forward hooks on all modules that process heterogeneous graphs in the model and runs the forward pass of the model. For heterogeneous models, the output is a dictionary where each key is a node type and each value is a list of embeddings from different layers.
model (torch.nn.Module) – The heterogeneous GNN model.
supported_models (List[Type[torch.nn.Module]], optional) – A list of supported model classes. If not provided, defaults to [HGTConv, HANConv, HeteroConv].
*args – Arguments passed to the model.
**kwargs (optional) – Additional keyword arguments passed to the model.
A dictionary mapping each node type to a list of embeddings from different layers.
Dict[NodeType, List[Tensor]]
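Example
A minimal sketch; the helper name get_embeddings_hetero is an assumption here, and model stands for any heterogeneous GNN built from the supported convolution classes:
>>> # `model`, `data` are placeholders for a heterogeneous GNN and its data
>>> emb_dict = get_embeddings_hetero(model, data.x_dict, data.edge_index_dict)
>>> len(emb_dict['paper'])  # one tensor per layer for node type 'paper'
2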
Trims the edge_index representation, node features x and edge features edge_attr to a minimal-sized representation for the current GNN layer layer in directed NeighborLoader scenarios.
This ensures that no computation is performed for nodes and edges that are not included in the current GNN layer, thus avoiding unnecessary computation within the GNN when performing neighborhood sampling.
layer (int) – The current GNN layer.
num_sampled_nodes_per_hop (List[int] or Dict[NodeType, List[int]]) – The number of sampled nodes per hop.
num_sampled_edges_per_hop (List[int] or Dict[EdgeType, List[int]]) – The number of sampled edges per hop.
x (torch.Tensor or Dict[NodeType, torch.Tensor]) – The homogeneous or heterogeneous (hidden) node features.
edge_index (torch.Tensor or Dict[EdgeType, torch.Tensor]) – The homogeneous or heterogeneous edge indices.
edge_attr (torch.Tensor or Dict[EdgeType, torch.Tensor], optional) – The homogeneous or heterogeneous (hidden) edge features.
Tuple[Union[Tensor, Dict[str, Tensor]], Union[Tensor, Dict[Tuple[str, str, str], Union[Tensor, SparseTensor]]], Union[Tensor, Dict[Tuple[str, str, str], Tensor], None]]
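Example
A sketch of how the trimming is typically applied inside a model's forward pass when using NeighborLoader; the surrounding module and the num_sampled_* inputs are placeholders that follow the parameters above:
>>> for i, conv in enumerate(self.convs):
...     # Trim inputs to the nodes/edges still needed at layer `i`:
...     x, edge_index, edge_attr = trim_to_layer(
...         layer=i,
...         num_sampled_nodes_per_hop=num_sampled_nodes,
...         num_sampled_edges_per_hop=num_sampled_edges,
...         x=x, edge_index=edge_index, edge_attr=edge_attr)
...     x = conv(x, edge_index, edge_attr)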
Calculates the personalized PageRank (PPR) vector for all or a subset of nodes using a variant of the Andersen algorithm.
edge_index (torch.Tensor) – The indices of the graph.
alpha (float, optional) – The alpha value of the PageRank algorithm. (default: 0.2)
eps (float, optional) – The threshold for stopping the PPR calculation (edge_weight >= eps * out_degree). (default: 1e-5)
target (torch.Tensor, optional) – The target nodes to compute PPR for. If not given, calculates PPR vectors for all nodes. (default: None)
num_nodes (int, optional) – The number of nodes. (default: None)
Splits the edges of a torch_geometric.data.Data object into positive and negative train/val/test edges. As such, it will replace the edge_index attribute with train_pos_edge_index, train_neg_adj_mask, val_pos_edge_index, val_neg_edge_index and test_pos_edge_index attributes. If data has edge features named edge_attr, then train_pos_edge_attr, val_pos_edge_attr and test_pos_edge_attr will be added as well.
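Example
A minimal sketch, assuming the splitter is exposed as train_test_split_edges and used with its default validation/test ratios:
>>> data = Data(edge_index=edge_index, num_nodes=4)
>>> data = train_test_split_edges(data)
>>> data.train_pos_edge_index  # positive training edges; `edge_index` itself is removed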
Compute Jacobian‑based influence aggregates for multiple seed nodes, as introduced in the “Towards Quantifying Long-Range Interactions in Graph Machine Learning: a Large Graph Dataset and a Measurement” paper. This measurement quantifies how a GNN model’s output at a node is influenced by features of other nodes at increasing hop distances.
Specifically, for every sampled node \(v\), this method
evaluates the L1‑norm of the Jacobian of the model output at \(v\) w.r.t. the node features of its k-hop induced sub‑graph;
sums these scores per hop to obtain the influence vector \((I_{0}, I_{1}, \dots, I_{k})\);
optionally averages those vectors over all sampled nodes and optionally normalises them by \(I_{0}\).
Please refer to Section 4 of the paper for a more detailed definition.
model (torch.nn.Module) – A PyTorch Geometric‑compatible model with forward signature model(x, edge_index) -> Tensor.
data (torch_geometric.data.Data) – Graph data object providing at least x (node features) and edge_index (connectivity).
max_hops (int) – Maximum hop distance \(k\).
num_samples (int, optional) – Number of random seed nodes to evaluate. If None, all nodes are used. (default: None)
normalize (bool, optional) – If True, normalize each hop‑wise influence by the influence of hop 0. (default: True)
average (bool, optional) – If True, return the hop‑wise mean over all seed nodes (shape [k+1]). If False, return the full influence matrix of shape [N, k+1]. (default: True)
device (torch.device or str, optional) – Device on which to perform the computation. (default: "cpu")
vectorize (bool, optional) – Forwarded to torch.autograd.functional.jacobian(). Keeping this True is often faster but increases memory usage. (default: True)
avg_influence (Tensor): shape [k+1] if average=True; shape [N, k+1] otherwise.
R (float): Influence‑weighted receptive‑field breadth returned by influence_weighted_receptive_field().
Tuple[Tensor, float]
>>> avg_I, R = total_influence(model, data, max_hops=3,
...                            num_samples=1000)
>>> avg_I
tensor([1.0000, 0.1273, 0.0142, 0.0019])
>>> R
0.216