Bases: MessagePassing
The ARMA graph convolutional operator from the “Graph Neural Networks with Convolutional ARMA Filters” paper.
\[\mathbf{X}^{\prime} = \frac{1}{K} \sum_{k=1}^K \mathbf{X}_k^{(T)},\]
with \(\mathbf{X}_k^{(T)}\) being recursively defined by
\[\mathbf{X}_k^{(t+1)} = \sigma \left( \mathbf{\hat{L}} \mathbf{X}_k^{(t)} \mathbf{W} + \mathbf{X}^{(0)} \mathbf{V} \right),\]
where \(\mathbf{\hat{L}} = \mathbf{I} - \mathbf{L} = \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\) denotes the modified Laplacian, with \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\) being the symmetrically normalized graph Laplacian.
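To make the recursion concrete, the following is a minimal sketch on dense tensors. It assumes a small toy graph and equal input and output dimensions so that a single weight \(\mathbf{W}\) applies at every step; the layer itself operates on sparse edge indices via message passing rather than dense matrices.

    import torch

    # Minimal dense sketch of the ARMA recursion above (illustrative only).
    num_nodes, channels, K, T = 4, 3, 2, 3

    A = torch.tensor([[0., 1., 0., 1.],
                      [1., 0., 1., 0.],
                      [0., 1., 0., 1.],
                      [1., 0., 1., 0.]])
    deg_inv_sqrt = A.sum(dim=1).pow(-0.5)
    L_hat = deg_inv_sqrt.view(-1, 1) * A * deg_inv_sqrt.view(1, -1)  # D^{-1/2} A D^{-1/2}

    X0 = torch.randn(num_nodes, channels)          # X^{(0)}
    out = torch.zeros(num_nodes, channels)
    for k in range(K):                             # K parallel stacks
        W = torch.randn(channels, channels)        # propagation weight (hypothetical init)
        V = torch.randn(channels, channels)        # skip-connection weight
        Xk = X0
        for _ in range(T):                         # T recursive updates
            Xk = torch.relu(L_hat @ Xk @ W + X0 @ V)  # X_k^{(t+1)}
        out = out + Xk / K                         # average over the K stacks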
in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.
out_channels (int) – Size of each output sample \(\mathbf{x}^{(t+1)}\).
num_stacks (int, optional) – Number of parallel stacks \(K\). (default: 1)
num_layers (int, optional) – Number of layers \(T\). (default: 1)
act (callable, optional) – Activation function \(\sigma\). (default: torch.nn.ReLU())
shared_weights (bool, optional) – If set to True, the layers in each stack will share the same parameters. (default: False)
dropout (float, optional) – Dropout probability of the skip connection. (default: 0.)
bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\), edge weights \((|\mathcal{E}|)\) (optional)
output: node features \((|\mathcal{V}|, F_{out})\)
Runs the forward pass of the module.
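For illustration, a forward call consistent with the shapes listed above could look as follows; the tensor sizes and edges are assumptions.

    import torch
    from torch_geometric.nn import ARMAConv

    # Assumed toy sizes: |V| = 5 nodes with F_in = 16 features, |E| = 6 edges.
    x = torch.randn(5, 16)
    edge_index = torch.tensor([[0, 1, 1, 2, 3, 4],
                               [1, 0, 2, 1, 4, 3]])

    conv = ARMAConv(in_channels=16, out_channels=32)
    out = conv(x, edge_index)  # node features of shape (|V|, F_out) = (5, 32)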
Resets all learnable parameters of the module.