Computes the first n columns of a product of Householder matrices.
Let $\mathbb{K}$ be $\mathbb{R}$ or $\mathbb{C}$, and let $A \in \mathbb{K}^{m \times n}$ be a matrix with columns $a_i \in \mathbb{K}^m$ for $i = 1, \ldots, n$, with $m \geq n$. Denote by $b_i$ the vector resulting from zeroing out the first $i - 1$ components of $a_i$ and setting its $i$-th component to $1$. For a vector $\tau \in \mathbb{K}^k$ with $k \leq n$, this function computes the first $n$ columns of the matrix
$$H_1 H_2 \cdots H_k \qquad\text{with}\qquad H_i = \mathrm{I}_m - \tau_i b_i b_i^{\mathrm{H}}$$
where $\mathrm{I}_m$ is the $m$-dimensional identity matrix and $b^{\mathrm{H}}$ is the conjugate transpose when $b$ is complex and the transpose when $b$ is real-valued. The output matrix has the same size as the input matrix $A$.
See Representation of Orthogonal or Unitary Matrices for further details.
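A minimal sketch (variable names and shapes are illustrative, not part of the API) that builds the product $H_1 H_2 \cdots H_k$ explicitly from the reflectors returned by torch.geqrf() and checks its first n columns against the output of this function:

import torch

m, n = 5, 3
A = torch.randn(m, n, dtype=torch.float64)
h, tau = torch.geqrf(A)                      # reflectors stored column-wise in h
k = tau.shape[-1]

prod = torch.eye(m, dtype=torch.float64)
for i in range(k):
    b = h[:, i].clone()
    b[:i] = 0.0                              # zero out the components above the i-th (0-based index)
    b[i] = 1.0                               # unit entry on the diagonal, as in the definition of b_i
    H = torch.eye(m, dtype=torch.float64) - tau[i] * torch.outer(b, b)   # H_i = I_m - tau_i b_i b_i^T
    prod = prod @ H

Q = torch.linalg.householder_product(h, tau)
print(torch.allclose(prod[:, :n], Q))        # True: first n columns of the product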
Supports inputs of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if the inputs are batches of matrices then the output has the same batch dimensions.
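For instance (shapes chosen arbitrarily), a batched call keeps the batch dimensions of the inputs:

>>> A = torch.randn(4, 6, 3, dtype=torch.cdouble)   # batch of 4 matrices with m=6, n=3
>>> h, tau = torch.geqrf(A)
>>> torch.linalg.householder_product(h, tau).shape
torch.Size([4, 6, 3])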
See also
torch.geqrf()
can be used together with this function to form the Q from the qr()
decomposition.
torch.ormqr()
is a related function that computes the matrix multiplication of a product of Householder matrices with another matrix. However, that function is not supported by autograd.
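For a square input, where the first n columns are the whole product, the two functions agree; a small sketch with illustrative shapes:

>>> A = torch.randn(4, 4, dtype=torch.float64)
>>> h, tau = torch.geqrf(A)
>>> B = torch.randn(4, 2, dtype=torch.float64)
>>> Q = torch.linalg.householder_product(h, tau)   # full 4 x 4 product since m == n
>>> torch.allclose(Q @ B, torch.ormqr(h, tau, B))
True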
Warning
Gradient computations are only well-defined if $\tau_i \neq \frac{1}{\|a_i\|^2}$. If this condition is not met, no error will be thrown, but the gradient produced may contain NaN.
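A minimal sketch of a defensive check one might add before relying on the gradients (the check itself is illustrative, not part of the API):

import torch

A = torch.randn(4, 3, dtype=torch.float64, requires_grad=True)
tau = torch.rand(3, dtype=torch.float64, requires_grad=True)

# Columns a_i of A; the gradient is ill-defined where tau_i == 1 / ||a_i||^2.
col_norms_sq = (A.detach() ** 2).sum(dim=0)
suspect = torch.isclose(tau.detach(), 1.0 / col_norms_sq)
if suspect.any():
    print("gradient may contain NaN for columns:", suspect.nonzero().flatten().tolist())

Q = torch.linalg.householder_product(A, tau)
Q.sum().backward()                           # gradients flow back to both A and tau
print(A.grad.shape, tau.grad.shape)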
out (Tensor, optional) – output tensor. Ignored if None. Default: None.
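A small sketch of writing the result into a preallocated tensor (shapes illustrative):

>>> h, tau = torch.geqrf(torch.randn(5, 3))
>>> out = torch.empty(5, 3)
>>> _ = torch.linalg.householder_product(h, tau, out=out)   # result written into out
>>> out.shape
torch.Size([5, 3])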
RuntimeError – if A does not satisfy the requirement m >= n, or tau does not satisfy the requirement n >= k.
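For instance, a wide matrix with m < n triggers the error (shapes illustrative):

import torch

A = torch.randn(2, 3)      # m = 2 < n = 3 violates m >= n
tau = torch.randn(2)
try:
    torch.linalg.householder_product(A, tau)
except RuntimeError as err:
    print("shape requirement violated:", err)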
Examples:
>>> A = torch.randn(2, 2)
>>> h, tau = torch.geqrf(A)
>>> Q = torch.linalg.householder_product(h, tau)
>>> torch.dist(Q, torch.linalg.qr(A).Q)
tensor(0.)

>>> h = torch.randn(3, 2, 2, dtype=torch.complex128)
>>> tau = torch.randn(3, 1, dtype=torch.complex128)
>>> Q = torch.linalg.householder_product(h, tau)
>>> Q
tensor([[[ 1.8034+0.4184j,  0.2588-1.0174j],
         [-0.6853+0.7953j,  2.0790+0.5620j]],

        [[ 1.4581+1.6989j, -1.5360+0.1193j],
         [ 1.3877-0.6691j,  1.3512+1.3024j]],

        [[ 1.4766+0.5783j,  0.0361+0.6587j],
         [ 0.6396+0.1612j,  1.3693+0.4481j]]], dtype=torch.complex128)