Return the singular value decomposition (U, S, V)
of a matrix, batches of matrices, or a sparse matrix A such that A ≈ U diag(S) V^H. If M is given, then the SVD is computed for the matrix A - M.
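A minimal sketch of typical usage; the matrix sizes, rank, and choice of q below are illustrative, not prescribed by the API:

```python
import torch

torch.manual_seed(0)  # randomized method; seed for reproducibility

# Build an exactly rank-5 test matrix (illustrative dimensions)
m, n, k = 100, 80, 5
A = torch.randn(m, k) @ torch.randn(k, n)

# q somewhat larger than the expected rank k
U, S, V = torch.svd_lowrank(A, q=10)

# Note that V (not V^H) is returned; V.mH is the conjugate transpose
A_approx = U @ torch.diag(S) @ V.mH
print(torch.dist(A, A_approx))  # small reconstruction error
```

Since A has exact rank 5 and q = 10 exceeds it, the reconstruction error is near machine precision.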
Note
The implementation is based on Algorithm 5.1 from Halko et al., 2009.
Note
For an adequate approximation of a rank-k matrix A, where k is not known in advance but can be estimated, the number of columns of Q, q, can be chosen according to the following criterion: in general, k <= q <= min(2*k, m, n). For large low-rank matrices, take q = k + 5..10. If k is small relative to min(m, n), choosing q = k + 0..2 may be sufficient.
Note
This is a randomized method. To obtain repeatable results, set the seed for the pseudorandom number generator.
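A sketch of the seeding pattern, using `torch.manual_seed` (the seed value and matrix size are arbitrary):

```python
import torch

A = torch.randn(50, 40)

# Re-seeding the generator before each call makes the
# randomized decomposition repeatable.
torch.manual_seed(42)
U1, S1, V1 = torch.svd_lowrank(A, q=6)

torch.manual_seed(42)
U2, S2, V2 = torch.svd_lowrank(A, q=6)

print(torch.allclose(S1, S2))  # identical singular values
```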
Note
In general, prefer the full-rank SVD implementation torch.linalg.svd()
for dense matrices, as it is typically about 10x faster. The low-rank SVD is useful for huge sparse matrices that torch.linalg.svd()
cannot handle.
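A sketch of the sparse use case, assuming a random sparse COO matrix (the size and density here are illustrative; in practice this matters for matrices far too large to densify):

```python
import torch

torch.manual_seed(0)

# Hypothetical sparse matrix: 1000 x 1000 with ~5000 nonzeros
indices = torch.randint(0, 1000, (2, 5000))
values = torch.randn(5000)
A = torch.sparse_coo_tensor(indices, values, (1000, 1000)).coalesce()

# torch.svd_lowrank accepts sparse input directly; no densification
U, S, V = torch.svd_lowrank(A, q=20)
print(U.shape, S.shape, V.shape)
```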