Bases: MessagePassing
The continuous kernel-based convolutional operator from the “Neural Message Passing for Quantum Chemistry” paper.
This convolution is also known as the edge-conditioned convolution from the “Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs” paper (see torch_geometric.nn.conv.ECConv for an alias):
\[\mathbf{x}^{\prime}_i = \mathbf{\Theta} \mathbf{x}_i + \sum_{j \in \mathcal{N}(i)} \mathbf{x}_j \cdot h_{\mathbf{\Theta}}(\mathbf{e}_{i,j}),\]
where \(h_{\mathbf{\Theta}}\) denotes a neural network, i.e., an MLP.
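As an illustration of the update rule above, the following is a minimal sketch with plain PyTorch (not the library implementation); the sizes and the h_theta MLP are hypothetical:

    import torch

    # Hypothetical sizes: F_in input features, F_out output features, D edge features.
    F_in, F_out, D = 4, 8, 3
    theta = torch.randn(F_out, F_in)             # root weight Theta
    h_theta = torch.nn.Linear(D, F_in * F_out)   # stand-in for the MLP h_Theta

    x_i = torch.randn(F_in)                                              # features of node i
    neighbors = [(torch.randn(F_in), torch.randn(D)) for _ in range(5)]  # (x_j, e_ij) pairs

    out = theta @ x_i                            # Theta x_i
    for x_j, e_ij in neighbors:
        W_ij = h_theta(e_ij).view(F_in, F_out)   # edge-conditioned filter h_Theta(e_ij)
        out = out + W_ij.t() @ x_j               # x_j transformed by the per-edge filter
    # out now holds x'_i with shape (F_out,)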
in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
out_channels (int) – Size of each output sample.
nn (torch.nn.Module) – A neural network \(h_{\mathbf{\Theta}}\) that maps edge features edge_attr of shape [-1, num_edge_features] to shape [-1, in_channels * out_channels], e.g., defined by torch.nn.Sequential (see the construction sketch after this parameter list).
aggr (str, optional) – The aggregation scheme to use ("add", "mean", "max"). (default: "add")
root_weight (bool, optional) – If set to False, the layer will not add the transformed root node features to the output. (default: True)
bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
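A minimal construction and forward-pass sketch, assuming hypothetical feature sizes; the edge network is built with torch.nn.Sequential and maps edge features to in_channels * out_channels values, as required by the nn argument:

    import torch
    from torch.nn import Linear, ReLU, Sequential
    from torch_geometric.nn import NNConv

    in_channels, out_channels, edge_dim = 16, 32, 4   # hypothetical sizes

    # h_Theta: maps edge features of shape [-1, edge_dim] to [-1, in_channels * out_channels]
    edge_mlp = Sequential(Linear(edge_dim, 64), ReLU(),
                          Linear(64, in_channels * out_channels))
    conv = NNConv(in_channels, out_channels, edge_mlp, aggr='mean')

    x = torch.randn(10, in_channels)              # node features (|V|, F_in)
    edge_index = torch.randint(0, 10, (2, 40))    # edge indices (2, |E|)
    edge_attr = torch.randn(40, edge_dim)         # edge features (|E|, D)

    out = conv(x, edge_index, edge_attr)          # node features (|V|, F_out), here (10, 32)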
input: node features \((|\mathcal{V}|, F_{in})\) or \(((|\mathcal{V_s}|, F_{s}), (|\mathcal{V_t}|, F_{t}))\) if bipartite, edge indices \((2, |\mathcal{E}|)\), edge features \((|\mathcal{E}|, D)\) (optional)
output: node features \((|\mathcal{V}|, F_{out})\) or \((|\mathcal{V}_t|, F_{out})\) if bipartite
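For the bipartite case, a hedged sketch (assuming that in_channels is passed as a (source, target) tuple and that the edge network maps to source_channels * out_channels values):

    import torch
    from torch.nn import Linear, ReLU, Sequential
    from torch_geometric.nn import NNConv

    F_s, F_t, F_out, D = 8, 6, 16, 3              # hypothetical source/target/output/edge sizes

    edge_mlp = Sequential(Linear(D, 32), ReLU(), Linear(32, F_s * F_out))
    conv = NNConv((F_s, F_t), F_out, edge_mlp)

    x_s = torch.randn(20, F_s)                    # source node features (|V_s|, F_s)
    x_t = torch.randn(5, F_t)                     # target node features (|V_t|, F_t)
    edge_index = torch.stack([torch.randint(0, 20, (50,)),   # source indices
                              torch.randint(0, 5, (50,))])   # target indices
    edge_attr = torch.randn(50, D)                # edge features (|E|, D)

    out = conv((x_s, x_t), edge_index, edge_attr) # target node features (|V_t|, F_out), here (5, 16)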
Runs the forward pass of the module.
Resets all learnable parameters of the module.