Bases: Module
Applies graph normalization over individual graphs as described in the “GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training” paper.
\[\mathbf{x}^{\prime}_i = \frac{\mathbf{x}_i - \alpha \odot \textrm{E}[\mathbf{x}]} {\sqrt{\textrm{Var}[\mathbf{x} - \alpha \odot \textrm{E}[\mathbf{x}]] + \epsilon}} \odot \gamma + \beta\]
where \(\alpha\) denotes a learnable parameter that controls how much of the mean to subtract from each feature channel.
in_channels (int) – Size of each input sample.
eps (float, optional) – A value added to the denominator for numerical stability. (default: 1e-5)
device (torch.device, optional) – The device to use for the module. (default: None)
Resets all learnable parameters of the module.
Forward pass.
x (torch.Tensor) – The source tensor.
batch (torch.Tensor, optional) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. (default: None)
batch_size (int, optional) – The number of examples \(B\). Automatically calculated if not given. (default: None)