Matrix multiplication. More...
AFAPI array matmul (const array &lhs, const array &rhs, const matProp optLhs=AF_MAT_NONE, const matProp optRhs=AF_MAT_NONE)
C++ Interface to multiply two matrices. More...

Matrix multiplication.
Performs a matrix multiplication on the two input arrays after performing the operations specified in the options. The operations are done while reading the data from memory. This results in no additional memory being used for temporary buffers.
Batched matrix multiplications are supported. The supported batch operations for any two matrices A and B are given below:

Size of Input Matrix A   | Size of Input Matrix B   | Output Matrix Size
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, 1, 1 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, b2, b3 \} \)

where M, K, and N are the matrix dimensions, and b2, b3 indicate the batch size along the respective dimension.
For the last two entries in the table above, the 2D matrix is broadcast to match the dimensions of the 3D/4D array. This broadcast does not involve any additional memory allocation on either the host or the device.
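The batching rules in the table can be sketched as a small shape helper (matmulOutputDims is a name invented here for illustration; it is not part of the ArrayFire API):

```cpp
#include <array>
#include <stdexcept>

using Dims = std::array<long long, 4>; // {d0, d1, d2, d3}

// Output shape of a (possibly batched) matmul per the table above.
// A is {M, K, a2, a3} and B is {K, N, b2, b3}; a batch dimension of 1
// broadcasts against the other operand's batch dimension.
Dims matmulOutputDims(const Dims& a, const Dims& b) {
    if (a[1] != b[0]) throw std::invalid_argument("inner dimensions must match");
    auto bcast = [](long long x, long long y) {
        if (x == y) return x;
        if (x == 1) return y;
        if (y == 1) return x;
        throw std::invalid_argument("batch dimensions must match or be 1");
    };
    return {a[0], b[1], bcast(a[2], b[2]), bcast(a[3], b[3])};
}
```

For example, a {5, 3, 1, 1} matrix multiplied by a {3, 4, 2, 2} batch yields a {5, 4, 2, 2} result, matching the third row of the table.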
AFAPI af_err af_gemm (af_array *C, const af_mat_prop opA, const af_mat_prop opB, const void *alpha, const af_array A, const af_array B, const void *beta)
C Interface to multiply two matrices.
This provides an interface to the BLAS level 3 general matrix multiply (GEMM) of two af_array objects, which is generally defined as:
\[ C = \alpha * opA(A)opB(B) + \beta * C \]
where \(\alpha\) (alpha) and \(\beta\) (beta) are both scalars; \(A\) and \(B\) are the matrix multiply operands; and \(opA\) and \(opB\) are no-op (if AF_MAT_NONE) or transpose (if AF_MAT_TRANS) operations on \(A\) or \(B\) before the actual GEMM operation. Batched GEMM is supported if either \(A\) or \(B\) has more than two dimensions (see af::matmul for more details on broadcasting). However, only one alpha and one beta can be used for all of the batched matrix operands.
The af_array that C points to can be used both as an input and an output. An allocation will be performed if you pass a null af_array handle (i.e. af_array c = 0;). If a valid af_array is passed as \(C\), the operation will be performed on that af_array itself. The \(C\) af_array must have the correct type and shape; otherwise, an error will be thrown.
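The GEMM definition above can be illustrated with a plain, single-threaded reference implementation (a sketch for clarity; gemmRef is an illustrative name, not how ArrayFire computes GEMM on the device):

```cpp
#include <vector>

// Reference GEMM: C = alpha * opA(A) * opB(B) + beta * C,
// where op is identity or transpose. Matrices are row-major;
// opA(A) is M x K and opB(B) is K x N.
void gemmRef(std::vector<float>& C, float alpha, float beta,
             const std::vector<float>& A, bool transA,
             const std::vector<float>& B, bool transB,
             int M, int N, int K) {
    for (int i = 0; i < M; ++i) {
        for (int j = 0; j < N; ++j) {
            float acc = 0.f;
            for (int k = 0; k < K; ++k) {
                // Element (i, k) of opA(A) and element (k, j) of opB(B).
                float a = transA ? A[k * M + i] : A[i * K + k];
                float b = transB ? B[j * K + k] : B[k * N + j];
                acc += a * b;
            }
            C[i * N + j] = alpha * acc + beta * C[i * N + j];
        }
    }
}
```

Note that beta scales the existing contents of C, which is why a valid C handle can serve as both input and output.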
This example demonstrates the usage of the af_gemm function on two matrices. The \(C\) af_array handle is initialized to zero here, so af_gemm will perform an allocation.
// Values passed to af_constant here are illustrative.
dim_t adims[] = {5, 3, 2};
dim_t bdims[] = {3, 5, 2};
af_array a = 0, b = 0;
af_constant(&a, 1.0, 3, adims, f32); // f32: 32-bit floating point values
af_constant(&b, 1.0, 3, bdims, f32);
float alpha = 1.f;
float beta = 0.f;
af_array c = 0; // null handle, so af_gemm performs the allocation
af_gemm(&c, AF_MAT_NONE, AF_MAT_NONE, &alpha, a, b, &beta);
The following example shows how you can write to a previously allocated af_array using the af_gemm call. Here we use the af_arrays from the previous example and index into the first slice. Only the first slice of the original \(C\) af_array will be modified by this operation.

// Select the first slice of c; af_span spans an entire dimension.
// (Snippet reconstructed; the variable names are illustrative.)
af_seq first_slice[] = {af_span, af_span, {0, 0, 1}};
af_array c_slice = 0;
af_index(&c_slice, c, 3, first_slice);
alpha = 1.f;
beta = 1.f;
af_gemm(&c_slice, AF_MAT_NONE, AF_MAT_NONE, &alpha, a, b, &beta);
A * B = C
[in] opA    operation to perform on A before the multiplication
[in] opB    operation to perform on B before the multiplication
[in] alpha  alpha value; must be the same type as A and B
[in] A      input array on the left-hand side
[in] B      input array on the right-hand side
[in] beta   beta value; must be the same type as A and B
C Interface to multiply two matrices.

Performs matrix multiplication on two arrays. One of the inputs may be sparse: the sparse matrix must be lhs and the dense matrix must be rhs. In that case, optLhs can only be one of AF_MAT_NONE, AF_MAT_TRANS, AF_MAT_CTRANS, and optRhs can only be AF_MAT_NONE.

lhs * rhs = out

[in] lhs     input array on the left-hand side
[in] rhs     input array on the right-hand side
[in] optLhs  transpose lhs before the function is performed
[in] optRhs  transpose rhs before the function is performed
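To illustrate why the sparse operand sits on the left, here is a minimal CSR-times-dense reference multiply (a sketch under the assumption of CSR storage; csrTimesDense is an invented name, and ArrayFire's sparse formats and kernels differ):

```cpp
#include <vector>

// Y = S * X, where S is an M x K sparse matrix in CSR form and
// X is a dense K x N row-major matrix. Iterating the rows of the
// CSR lhs lets each nonzero be visited exactly once.
std::vector<float> csrTimesDense(int M, int N,
                                 const std::vector<int>& rowPtr,
                                 const std::vector<int>& colIdx,
                                 const std::vector<float>& vals,
                                 const std::vector<float>& X) {
    std::vector<float> Y(M * N, 0.f);
    for (int i = 0; i < M; ++i)
        for (int p = rowPtr[i]; p < rowPtr[i + 1]; ++p)
            for (int j = 0; j < N; ++j)
                Y[i * N + j] += vals[p] * X[colIdx[p] * N + j];
    return Y;
}
```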
C++ Interface to chain multiply three matrices.

The matrix multiplications are performed in a way that reduces temporary memory.

This function is not supported in GFOR.

C++ Interface to chain multiply three matrices.

The matrix multiplications are performed in a way that reduces temporary memory.

This function is not supported in GFOR.
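The saving can be illustrated by comparing the intermediate sizes of the two association orders for a three-matrix chain (a sketch; chainTempElems is an illustrative name, and the library's actual strategy may differ):

```cpp
#include <cstdint>

// For a chain A(MxK) * B(KxN) * C(NxP), the intermediate product
// differs by association order:
//   (A*B)*C needs an M x N temporary; A*(B*C) needs a K x P temporary.
// A chained matmul can pick whichever order needs the smaller temporary.
std::int64_t chainTempElems(std::int64_t M, std::int64_t K,
                            std::int64_t N, std::int64_t P) {
    std::int64_t leftFirst  = M * N; // temp for (A*B)
    std::int64_t rightFirst = K * P; // temp for (B*C)
    return leftFirst < rightFirst ? leftFirst : rightFirst;
}
```

For instance, with a 10x2 times 2x100 times 100x2 chain, multiplying the right pair first needs only a 2x2 temporary instead of a 10x100 one.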
C++ Interface to multiply two matrices.
Performs a matrix multiplication on the two input arrays after performing the operations specified in the options. The operations are done while reading the data from memory. This results in no additional memory being used for temporary buffers.
Batched matrix multiplications are supported. The supported batch operations for any two matrices A and B are given below:

Size of Input Matrix A   | Size of Input Matrix B   | Output Matrix Size
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, 1, 1 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, b2, b3 \} \)

where M, K, and N are the matrix dimensions, and b2, b3 indicate the batch size along the respective dimension.
For the last two entries in the table above, the 2D matrix is broadcast to match the dimensions of the 3D/4D array. This broadcast does not involve any additional memory allocation on either the host or the device.
optLhs and optRhs can only be one of AF_MAT_NONE, AF_MAT_TRANS, AF_MAT_CTRANS.
This function is not supported in GFOR.
For sparse-dense multiplication, the sparse matrix must be lhs and the dense matrix must be rhs; then optLhs can only be one of AF_MAT_NONE, AF_MAT_TRANS, AF_MAT_CTRANS, and optRhs can only be AF_MAT_NONE.

lhs * rhs
C++ Interface to multiply two matrices.
The second matrix will be transposed.
Performs a matrix multiplication on the two input arrays after performing the operations specified in the options. The operations are done while reading the data from memory. This results in no additional memory being used for temporary buffers.
Batched matrix multiplications are supported. The supported batch operations for any two matrices A and B are given below:

Size of Input Matrix A   | Size of Input Matrix B   | Output Matrix Size
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, 1, 1 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, b2, b3 \} \)

where M, K, and N are the matrix dimensions, and b2, b3 indicate the batch size along the respective dimension.
For the last two entries in the table above, the 2D matrix is broadcast to match the dimensions of the 3D/4D array. This broadcast does not involve any additional memory allocation on either the host or the device.
This function is not supported in GFOR.
lhs * transpose(rhs)
C++ Interface to multiply two matrices.
The first matrix will be transposed.
Performs a matrix multiplication on the two input arrays after performing the operations specified in the options. The operations are done while reading the data from memory. This results in no additional memory being used for temporary buffers.
Batched matrix multiplications are supported. The supported batch operations for any two matrices A and B are given below:

Size of Input Matrix A   | Size of Input Matrix B   | Output Matrix Size
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, 1, 1 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, b2, b3 \} \)

where M, K, and N are the matrix dimensions, and b2, b3 indicate the batch size along the respective dimension.
For the last two entries in the table above, the 2D matrix is broadcast to match the dimensions of the 3D/4D array. This broadcast does not involve any additional memory allocation on either the host or the device.
This function is not supported in GFOR.
transpose(lhs) * rhs
C++ Interface to multiply two matrices.
Both matrices will be transposed.
Performs a matrix multiplication on the two input arrays after performing the operations specified in the options. The operations are done while reading the data from memory. This results in no additional memory being used for temporary buffers.
Batched matrix multiplications are supported. The supported batch operations for any two matrices A and B are given below:

Size of Input Matrix A   | Size of Input Matrix B   | Output Matrix Size
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, 1, 1 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, 1, 1 \} \)   | \( \{ K, N, b2, b3 \} \) | \( \{ M, N, b2, b3 \} \)
\( \{ M, K, b2, b3 \} \) | \( \{ K, N, 1, 1 \} \)   | \( \{ M, N, b2, b3 \} \)

where M, K, and N are the matrix dimensions, and b2, b3 indicate the batch size along the respective dimension.
For the last two entries in the table above, the 2D matrix is broadcast to match the dimensions of the 3D/4D array. This broadcast does not involve any additional memory allocation on either the host or the device.
This function is not supported in GFOR.
transpose(lhs) * transpose(rhs)
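The relationship between the variants can be sketched with a small row-major reference implementation (illustrative names, not the ArrayFire API):

```cpp
#include <vector>

// Reference helpers for the variants:
//   matmul:   lhs * rhs
//   matmulNT: lhs * transpose(rhs)
//   matmulTN: transpose(lhs) * rhs
//   matmulTT: transpose(lhs) * transpose(rhs)
using Mat = std::vector<float>;

Mat transposeRef(const Mat& a, int rows, int cols) {
    Mat t(a.size());
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            t[j * rows + i] = a[i * cols + j];
    return t;
}

Mat matmulRef(const Mat& a, const Mat& b, int m, int k, int n) {
    Mat c(m * n, 0.f);
    for (int i = 0; i < m; ++i)
        for (int p = 0; p < k; ++p)
            for (int j = 0; j < n; ++j)
                c[i * n + j] += a[i * k + p] * b[p * n + j];
    return c;
}

// matmulNT(lhs, rhs) for lhs of size m x k and rhs of size n x k:
Mat matmulNTRef(const Mat& lhs, const Mat& rhs, int m, int k, int n) {
    return matmulRef(lhs, transposeRef(rhs, n, k), m, k, n);
}
```

The transposing variants let you express these products without materializing a transposed copy; the explicit transposeRef above is only for showing the equivalence.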