Various useful functions
Functions
check_params: check whether some parameters are missing
Turn seed into a np.random.RandomState instance
seed (None | int | instance of RandomState) – If seed is None, return the RandomState singleton used by np.random. If seed is an int, return a new RandomState instance seeded with seed. If seed is already a RandomState instance, return it. Otherwise raise ValueError.
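A minimal numpy sketch of the behaviour described above (illustrative only, not the library source):
>>> import numpy as np
>>> def check_random_state(seed):
...     # illustrative sketch of the rules above, not the library source
...     if seed is None or seed is np.random:
...         return np.random.mtrand._rand  # the RandomState singleton used by np.random
...     if isinstance(seed, (int, np.integer)):
...         return np.random.RandomState(seed)
...     if isinstance(seed, np.random.RandomState):
...         return seed
...     raise ValueError(f"{seed!r} cannot be used to seed a RandomState instance")
>>> rng = check_random_state(42)
>>> rng2 = check_random_state(rng)  # an existing RandomState is returned unchanged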
Remove all components with zero weights in \(\mathbf{a}\) and \(\mathbf{b}\)
Apply normalization to the loss matrix
C (ndarray, shape (n1, n2)) – The cost matrix to normalize.
norm (str) – Type of normalization from ‘median’, ‘max’, ‘log’, ‘loglog’. Any other value leaves the matrix unchanged (no normalization).
C – The input cost matrix normalized according to given norm.
ndarray, shape (n1, n2)
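A short usage sketch (assuming the function is exposed as ot.utils.cost_normalization and that ot.dist is available, as documented below):
>>> import numpy as np
>>> import ot
>>> C = ot.dist(np.random.randn(5, 2), np.random.randn(5, 2))  # squared Euclidean cost matrix
>>> C_max = ot.utils.cost_normalization(C, norm='max')     # divide by the largest entry
>>> C_med = ot.utils.cost_normalization(C, norm='median')  # divide by the median entry
>>> C_log = ot.utils.cost_normalization(C, norm='log')     # log(1 + C) rescaling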
Compute distance between samples in \(\mathbf{x_1}\) and \(\mathbf{x_2}\)
Note
This function is backend-compatible and will work on arrays from all compatible backends.
x1 (array-like, shape (n1,d)) – matrix with n1 samples of size d
x2 (array-like, shape (n2,d), optional) – matrix with n2 samples of size d (if None then \(\mathbf{x_2} = \mathbf{x_1}\))
metric (str | callable, optional) – ‘sqeuclidean’ or ‘euclidean’ on all backends. On numpy, the function also accepts the metrics of scipy.spatial.distance.cdist: ‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘cityblock’, ‘correlation’, ‘cosine’, ‘dice’, ‘euclidean’, ‘hamming’, ‘jaccard’, ‘kulczynski1’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘wminkowski’, ‘yule’.
p (float, optional) – p-norm for the Minkowski and the Weighted Minkowski metrics. Default value is 2.
w (array-like, rank 1) – Weights for the weighted metrics.
M – distance matrix computed with given metric
array-like, shape (n1, n2)
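Usage sketch (ot.dist is the usual public alias of this function; the defaults shown here are assumptions to double-check):
>>> import numpy as np
>>> import ot
>>> x1, x2 = np.random.randn(4, 3), np.random.randn(6, 3)
>>> M = ot.dist(x1, x2)                         # squared Euclidean distances by default, shape (4, 6)
>>> M_euc = ot.dist(x1, x2, metric='euclidean')
>>> M_cb = ot.dist(x1, x2, metric='cityblock')  # any cdist metric listed above (numpy arrays only)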
Compute standard cost matrices of size (n, n) for OT problems
dots: chained matrix multiplication of an arbitrary number of matrices
Considering the rows of \(\mathbf{X}\) (and \(\mathbf{Y} = \mathbf{X}\)) as vectors, compute the distance matrix between each pair of vectors.
Note
This function is backend-compatible and will work on arrays from all compatible backends.
X (array-like, shape (n_samples_1, n_features))
Y (array-like, shape (n_samples_2, n_features))
squared (boolean, optional) – Return squared Euclidean distances.
distances
array-like, shape (n_samples_1, n_samples_2)
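For illustration (assuming euclidean_distances is importable from ot.utils):
>>> import numpy as np
>>> from ot.utils import euclidean_distances  # assumed import path
>>> X, Y = np.random.randn(4, 3), np.random.randn(5, 3)
>>> D = euclidean_distances(X, Y)                 # pairwise Euclidean distances, shape (4, 5)
>>> D2 = euclidean_distances(X, Y, squared=True)  # squared distances (D ** 2 up to numerical precision)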
For \(x\in S^1 \subset \mathbb{R}^2\), returns its angular coordinate in turns (in [0,1[).
\[u = \frac{\pi + \mathrm{atan2}(-x_2,-x_1)}{2\pi}\]
x (ndarray, shape (n, 2)) – Samples on the circle with ambient coordinates
x_t – Coordinates on [0,1[
ndarray, shape (n,)
Examples
>>> u = np.array([[0.2, 0.5, 0.8]]) * (2 * np.pi)
>>> x1, y1 = np.cos(u), np.sin(u)
>>> x = np.concatenate([x1, y1]).T
>>> get_coordinate_circle(x)
array([0.2, 0.5, 0.8])
Get a low rank LazyTensor T=Q@R^T or T=Q@diag(d)@R^T
Q (ndarray, shape (n, r)) – First factor of the lowrank tensor
R (ndarray, shape (m, r)) – Second factor of the lowrank tensor
d (ndarray, shape (r,), optional) – Diagonal of the lowrank tensor
nx (Backend, optional) – Backend to use for the reduction
T – Lowrank tensor T=Q@R^T or T=Q@diag(d)@R^T
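A small sketch of the intended use (assumed import path; the LazyTensor indexing used here is documented further below):
>>> import numpy as np
>>> from ot.utils import get_lowrank_lazytensor  # assumed import path
>>> Q, R = np.random.rand(10, 3), np.random.rand(8, 3)
>>> T = get_lowrank_lazytensor(Q, R)  # lazy view of Q @ R.T, the (10, 8) matrix is never stored
>>> bool(np.allclose(T[:], Q @ R.T))  # T[:] materializes the full array on demand
True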
Extract a pair of parameters from a given parameter. Used in unbalanced OT and COOT solvers to handle marginal regularization and entropic regularization.
parameter (float or indexable object)
nx (backend object)
param_1 (float)
param_2 (float)
Tests element-wise for finiteness in all arguments.
Compute kernel matrix
Transform labels to start at a given value
y – The input vector of labels normalized according to given start value.
array-like, shape (n1, )
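For illustration (assuming label_normalization is importable from ot.utils with a start argument defaulting to 0; the commented results are expected, not verified, outputs):
>>> import numpy as np
>>> from ot.utils import label_normalization  # assumed import path
>>> y = np.array([3, 4, 4, 6])
>>> y0 = label_normalization(y)           # labels shifted so the smallest one equals 0: [0, 1, 1, 3]
>>> y1 = label_normalization(y, start=1)  # labels shifted so the smallest one equals 1: [1, 2, 2, 4]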
Transforms (n_samples,) vector of labels into a (n_samples, n_labels) matrix of masks.
y (array-like, shape (n_samples, )) – The vector of labels.
type_as (array_like) – Array of the same type as the expected output.
nx (Backend, optional) – Backend to perform computations on. If omitted, the backend defaults to that of y.
masks – The (n_samples, n_labels) matrix of label masks.
array-like, shape (n_samples, n_labels)
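A short sketch (assuming labels_to_masks is importable from ot.utils):
>>> import numpy as np
>>> from ot.utils import labels_to_masks  # assumed import path
>>> y = np.array([0, 1, 1, 2])
>>> masks = labels_to_masks(y)  # shape (4, 3); masks[i, k] is nonzero exactly when sample i has the k-th label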
Compute Laplacian matrix
Convert lists to arrays in numpy format if needed.
Parallel map for multiprocessing. This function has been deprecated and now only performs a regular map.
Project a symmetric matrix onto the space of symmetric matrices with eigenvalues larger or equal to vmin.
S (array_like (n, d, d) or (d, d)) – The input symmetric matrix or matrices.
nx (module, optional) – The numerical backend module to use. If not provided, the backend will be fetched from the input matrix S.
vmin (float, optional) – The minimum value for the eigenvalues. Eigenvalues below this value will be clipped to vmin.
Note
This function is backend-compatible and will work on arrays from all compatible backends.
P – The projected symmetric positive definite matrix.
ndarray (n, d, d) or (d, d)
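A usage sketch (assuming proj_SDP is importable from ot.utils, with vmin as documented above):
>>> import numpy as np
>>> from ot.utils import proj_SDP  # assumed import path
>>> A = np.random.randn(4, 4)
>>> S = 0.5 * (A + A.T)        # a symmetric, possibly indefinite matrix
>>> P = proj_SDP(S, vmin=0.0)  # negative eigenvalues are clipped to 0
>>> bool(np.all(np.linalg.eigvalsh(P) >= -1e-8))
True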
Compute the closest point (orthogonal projection) on the generalized (n-1)-simplex of a vector \(\mathbf{v}\) w.r.t. the Euclidean distance, thus solving:
\[\mathcal{P}(\mathbf{v}) \in \mathop{\arg \min}_{\gamma} \| \gamma - \mathbf{v} \|_2 \quad \text{s.t.} \quad \gamma^T \mathbf{1} = z, \ \gamma \geq 0\]
If \(\mathbf{v}\) is a 2d array, compute all the projections wrt. axis 0
Note
This function is backend-compatible and will work on arrays from all compatible backends.
v (array-like, shape (n, d))
z (int, optional) – ‘size’ of the simplex (each vector sums to z, 1 by default)
h – Array of projections on the simplex
ndarray, shape (n, d)
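Usage sketch (assuming the function is exposed as ot.utils.proj_simplex):
>>> import numpy as np
>>> import ot
>>> v = np.array([0.3, -0.2, 1.5])
>>> w = ot.utils.proj_simplex(v)  # closest point on the probability simplex (z=1)
>>> bool(np.isclose(w.sum(), 1.0) and np.all(w >= 0))
True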
Projection of \(\mathbf{V}\) onto the simplex with cardinality constraint (maximum number of non-zero elements) and then scaled by z.
\[P\left(\mathbf{V}, \mathrm{max\_nz}, z\right) = \mathop{\arg \min}_{\substack{\mathbf{y} \geq 0 \\ \sum_i \mathbf{y}_i = z \\ \|\mathbf{y}\|_0 \le \mathrm{max\_nz}}} \quad \|\mathbf{y} - \mathbf{V}\|^2\]
V (1-dim or 2-dim ndarray)
z (float or array) – If array, len(z) must be compatible with \(\mathbf{V}\)
axis (None or int) –
axis=None: project \(\mathbf{V}\) by \(P(\mathbf{V}.\mathrm{ravel}(), max_nz, z)\)
axis=1: project each \(\mathbf{V}_i\) by \(P(\mathbf{V}_i, max_nz, z_i)\)
axis=0: project each \(\mathbf{V}_{:, j}\) by \(P(\mathbf{V}_{:, j}, max_nz, z_j)\)
projection (ndarray, shape \(\mathbf{V}\).shape)
References – Anastasios Kyrillidis, Stephen Becker, Volkan Cevher and Christoph Koch, “Sparse projections onto the simplex”, ICML 2013. https://arxiv.org/abs/1206.1529
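The idea can be sketched in plain numpy (an illustrative sketch, not the library implementation): keep the max_nz largest entries of the input, project them onto the simplex of mass z, and zero out the rest.
>>> import numpy as np
>>> def sparse_simplex_projection_sketch(v, max_nz, z=1.0):
...     # illustrative sketch: top-max_nz entries, then a standard sorting-based simplex projection
...     idx = np.argsort(v)[-max_nz:]              # indices of the max_nz largest entries
...     u = np.sort(v[idx])[::-1]                  # those entries, in decreasing order
...     css = np.cumsum(u) - z
...     rho = np.nonzero(u * np.arange(1, max_nz + 1) > css)[0][-1]
...     theta = css[rho] / (rho + 1.0)
...     w = np.zeros_like(v, dtype=float)
...     w[idx] = np.maximum(v[idx] - theta, 0.0)
...     return w
>>> sparse_simplex_projection_sketch(np.array([0.1, 2.0, -1.0, 0.5]), max_nz=2)
array([0., 1., 0., 0.])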
Reduce a LazyTensor along an axis with function func using batches.
When axis=None, reduce the LazyTensor to a scalar as a sum of func over batches taken along axis 0.
Warning
This function works for tensors of any order, but the reduction can only be done along the first two axes (or globally). Also, it requires that a slice of size batch_size along the axis to reduce (or axis 0 if axis=None) can be computed and fits in memory.
a (LazyTensor) – LazyTensor to reduce
func (callable) – Function to apply to the LazyTensor
axis (int, optional) – Axis along which to reduce the LazyTensor. If None, reduce the LazyTensor to a scalar as a sum of func over batches taken along axis 0. If 0 or 1, reduce the LazyTensor to a vector/matrix as a sum of func over batches taken along that axis.
nx (Backend, optional) – Backend to use for the reduction
batch_size (int, optional) – Size of the batches to use for the reduction (default=100)
res – Result of the reduction
array-like
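A brief sketch using the LazyTensor documented further below (assumed import paths; np.sum is assumed to be a valid reduction function here):
>>> import numpy as np
>>> from ot.utils import LazyTensor, reduce_lazytensor  # assumed import paths
>>> v = np.arange(5)
>>> T = LazyTensor((5, 5), lambda i, j, v: v[i, None] + v[None, j], v=v)
>>> total = reduce_lazytensor(T, np.sum)                       # global sum of all entries, computed batch by batch
>>> rows = reduce_lazytensor(T, np.sum, axis=1, batch_size=2)  # one value per row, T is never fully stored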
Python implementation of Matlab tic() function
Python implementation of Matlab toc() function
Python implementation of Julia toc() function
Return a uniform histogram of length n (simplex).
n (int) – number of bins in the histogram
type_as (array_like) – array of the same type as the expected output (numpy/pytorch/jax)
h – histogram of length n such that \(\forall i, \mathbf{h}_i = \frac{1}{n}\)
array_like (n,)
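For example (assuming the function is exposed as ot.unif, the usual public alias):
>>> import ot
>>> ot.unif(4)
array([0.25, 0.25, 0.25, 0.25])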
Base class for most objects in POT
Code adapted from sklearn BaseEstimator class
Notes
All estimators should specify all the parameters that can be set at the class level in their __init__ as explicit keyword arguments (no *args or **kwargs).
Get parameters for this estimator.
deep (bool, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
params – Parameter names mapped to their values.
mapping of string to any
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
self
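As an illustration of the get_params/set_params API (ot.da.SinkhornTransport is used here as an assumed example of an estimator deriving from BaseEstimator):
>>> import ot
>>> est = ot.da.SinkhornTransport(reg_e=1.0)
>>> params = est.get_params()        # dict mapping parameter names to values, e.g. params['reg_e'] == 1.0
>>> est = est.set_params(reg_e=0.1)  # update a parameter in place (returns self)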
A lazy tensor is a tensor that is not stored in memory. Instead, it is defined by a function that computes its values on the fly from slices.
Examples
>>> import numpy as np
>>> v = np.arange(5)
>>> def getitem(i, j, v):
...     return v[i, None] + v[None, j]
>>> T = LazyTensor((5, 5), getitem, v=v)
>>> T[1, 2]
array([3])
>>> T[1, :]
array([[1, 2, 3, 4, 5]])
>>> T[:]
array([[0, 1, 2, 3, 4],
       [1, 2, 3, 4, 5],
       [2, 3, 4, 5, 6],
       [3, 4, 5, 6, 7],
       [4, 5, 6, 7, 8]])
Base class for OT results.
potentials (tuple of array-like, shape (n1, n2)) – Dual potentials, i.e. Lagrange multipliers for the marginal constraints. This pair of arrays has the same shape, numerical type and properties as the input weights “a” and “b”.
value (float, array-like) – Full transport cost, including possible regularization terms and quadratic term for Gromov Wasserstein solutions.
value_linear (float, array-like) – The linear part of the transport cost, i.e. the product between the transport plan and the cost.
value_quad (float, array-like) – The quadratic part of the transport cost for Gromov-Wasserstein solutions.
plan (array-like, shape (n1, n2)) – Transport plan, encoded as a dense array.
log (dict) – Dictionary containing potential information about the solver.
backend (Backend) – Backend used to compute the results.
sparse_plan (array-like, shape (n1, n2)) – Transport plan, encoded as a sparse array.
lazy_plan (LazyTensor) – Transport plan, encoded as a symbolic POT or KeOps LazyTensor.
batch_size (int) – Batch size used to compute the results/marginals for LazyTensor.
Dual potentials, i.e. Lagrange multipliers for the marginal constraints. This pair of arrays has the same shape, numerical type and properties as the input weights “a” and “b”.
tuple of array-like, shape (n1, n2)
First dual potential, associated to the “source” measure “a”.
array-like, shape (n1,)
Second dual potential, associated to the “target” measure “b”.
array-like, shape (n2,)
Full transport cost, including possible regularization terms and quadratic term for Gromov Wasserstein solutions.
float, array-like
The linear part of the transport cost, i.e. the product between the transport plan and the cost.
float, array-like
The quadratic part of the transport cost for Gromov-Wasserstein solutions.
float, array-like
Transport plan, encoded as a dense array.
array-like, shape (n1, n2)
Transport plan, encoded as a sparse array.
array-like, shape (n1, n2)
Transport plan, encoded as a symbolic POT or KeOps LazyTensor.
Marginals of the transport plan: should be very close to “a” and “b” for balanced OT.
tuple of array-like, shape (n1,), (n2,)
Marginal of the transport plan for the “source” measure “a”.
array-like, shape (n1,)
Marginal of the transport plan for the “target” measure “b”.
array-like, shape (n2,)
Displacement vectors from the first to the second measure.
Displacement vectors from the second to the first measure.
Appropriate citation(s) for this result, in plain text and BibTeX formats.
Transport plan, encoded as a symbolic KeOps LazyTensor.
Dictionary containing potential information about the solver.
First marginal of the transport plan, with the same shape as “a”.
Second marginal of the transport plan, with the same shape as “b”.
Marginals of the transport plan: should be very close to “a” and “b” for balanced OT.
Transport plan, encoded as a dense array.
First dual potential, associated to the “source” measure “a”.
Second dual potential, associated to the “target” measure “b”.
Dual potentials, i.e. Lagrange multipliers for the marginal constraints.
This pair of arrays has the same shape, numerical type and properties as the input weights “a” and “b”.
Transport plan, encoded as a sparse array.
Optimization status of the solver.
Full transport cost, including possible regularization terms and quadratic term for Gromov Wasserstein solutions.
The “minimal” transport cost, i.e. the product between the transport plan and the cost.
The quadratic part of the transport cost for Gromov-Wasserstein solutions.
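A brief sketch of how such a result object is typically obtained and inspected (assuming the generic ot.solve interface):
>>> import numpy as np
>>> import ot
>>> x, y = np.random.randn(20, 2), np.random.randn(30, 2)
>>> M = ot.dist(x, y)       # cost matrix
>>> res = ot.solve(M)       # exact, balanced OT with uniform weights by default (assumption)
>>> P = res.plan            # dense transport plan, shape (20, 30)
>>> cost = res.value        # full transport cost
>>> u, v = res.potentials   # dual potentials, shapes (20,) and (30,)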
Decorator to mark a function or class as deprecated.
Decorator adapted from the scikit-learn package (https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/deprecation.py). It issues a warning when the function is called or the class is instantiated, and adds a warning to the docstring. The optional extra argument will be appended to the deprecation message and the docstring.
Note
To use this with the default value for extra, use empty parentheses:
>>> from ot.deprecation import deprecated
>>> @deprecated()
... def some_function(): pass
extra (str) – To be added to the deprecation messages.
Aims at raising an exception when an undefined parameter is called.