UMAP has only two classes: UMAP and ParametricUMAP, which inherits from it.
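A minimal usage sketch (toy random data; the parameter values shown are illustrative defaults, not recommendations):

```python
import numpy as np
import umap

X = np.random.rand(100, 10)  # toy data; substitute your own

# fit the manifold approximation and embed in one step
reducer = umap.UMAP(n_neighbors=15, n_components=2, min_dist=0.1)
embedding = reducer.fit_transform(X)
print(embedding.shape)  # (100, 2)
```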
Uniform Manifold Approximation and Projection
Finds a low dimensional embedding of the data that approximates an underlying manifold.
The size of local neighborhood (in terms of number of neighboring sample points) used for manifold approximation. Larger values result in more global views of the manifold, while smaller values result in more local data being preserved. In general values should be in the range 2 to 100.
The dimension of the space to embed into. This defaults to 2 to provide easy visualization, but can reasonably be set to any integer value in the range 2 to 100.
The metric to use to compute distances in high dimensional space. If a string is passed it must match a valid predefined metric. If a general metric is required a function that takes two 1d arrays and returns a float can be provided. For performance purposes it is required that this be a numba jit’d function. Valid string metrics include:
euclidean
manhattan
chebyshev
minkowski
canberra
braycurtis
mahalanobis
wminkowski
seuclidean
cosine
correlation
haversine
hamming
jaccard
dice
russellrao
kulsinski
ll_dirichlet
hellinger
rogerstanimoto
sokalmichener
sokalsneath
yule
Metrics that take arguments (such as minkowski, mahalanobis etc.) can have arguments passed via the metric_kwds dictionary. At this time care must be taken and dictionary elements must be ordered appropriately; this will hopefully be fixed in the future.
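For example, a custom metric can be supplied as a numba jit’d function taking two 1d arrays and returning a float, while built-in metrics receive their arguments through metric_kwds (the toy metric below simply reimplements manhattan distance for illustration):

```python
import numba
import umap

@numba.njit()
def my_manhattan(x, y):
    # a custom metric: two 1-d arrays in, one float out
    result = 0.0
    for i in range(x.shape[0]):
        result += abs(x[i] - y[i])
    return result

reducer = umap.UMAP(metric=my_manhattan)

# built-in metrics that take arguments receive them via metric_kwds
reducer_minkowski = umap.UMAP(metric="minkowski", metric_kwds={"p": 3})
```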
The number of training epochs to be used in optimizing the low dimensional embedding. Larger values result in more accurate embeddings. If None is specified a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small).
The initial learning rate for the embedding optimization.
How to initialize the low dimensional embedding. Options are:
- ‘spectral’: use a spectral embedding of the fuzzy 1-skeleton.
- ‘random’: assign initial embedding positions at random.
- ‘pca’: use the first n_components from PCA applied to the input data.
- ‘tswspectral’: use a spectral embedding of the fuzzy 1-skeleton, using a truncated singular value decomposition to “warm” up the eigensolver. This is intended as an alternative to the ‘spectral’ method, if that takes an excessively long time to complete initialization (or fails to complete).
A numpy array of initial embedding positions.
The effective minimum distance between embedded points. Smaller values will result in a more clustered/clumped embedding where nearby points on the manifold are drawn closer together, while larger values will result in a more even dispersal of points. The value should be set relative to the spread value, which determines the scale at which embedded points will be spread out.
The effective scale of embedded points. In combination with min_dist this determines how clustered/clumped the embedded points are.
For some datasets the nearest neighbor computation can consume a lot of memory. If you find that UMAP is failing due to memory constraints consider setting this option to True. This approach is more computationally expensive, but avoids excessive memory use.
Interpolate between (fuzzy) union and intersection as the set operation used to combine local fuzzy simplicial sets to obtain a global fuzzy simplicial set. Both fuzzy set operations use the product t-norm. The value of this parameter should be between 0.0 and 1.0; a value of 1.0 will use a pure fuzzy union, while 0.0 will use a pure fuzzy intersection.
The local connectivity required – i.e. the number of nearest neighbors that should be assumed to be connected at a local level. The higher this value the more connected the manifold becomes locally. In practice this should not be more than the local intrinsic dimension of the manifold.
Weighting applied to negative samples in low dimensional embedding optimization. Values higher than one will result in greater weight being given to negative samples.
The number of negative samples to select per positive sample in the optimization process. Increasing this value will result in greater repulsive force being applied, greater optimization cost, but slightly more accuracy.
For transform operations (embedding new points using a trained model) this will control how aggressively to search for nearest neighbors. Larger values will result in slower performance but more accurate nearest neighbor evaluation.
More specific parameters controlling the embedding. If None these values are set automatically as determined by min_dist and spread.
More specific parameters controlling the embedding. If None these values are set automatically as determined by min_dist and spread.
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
Arguments to pass on to the metric, such as the p value for Minkowski distance. If None then no arguments are passed on.
Whether to use an angular random projection forest to initialise the approximate nearest neighbor search. This can be faster, but is mostly only useful for a metric that uses an angular style distance such as cosine, correlation etc. In the case of those metrics angular forests will be chosen automatically.
The number of nearest neighbors to use to construct the target simplicial set. If set to -1 use the n_neighbors value.
The metric used to measure distance for a target array when using supervised dimension reduction. By default this is ‘categorical’ which will measure distance in terms of whether categories match or are different. Furthermore, if semi-supervised learning is required, target values of -1 will be treated as unlabelled under the ‘categorical’ metric. If the target array takes continuous values (e.g. for a regression problem) then a metric of ‘l1’ or ‘l2’ is probably more appropriate.
Keyword arguments to pass to the target metric when performing supervised dimension reduction. If None then no arguments are passed on.
Weighting factor between data topology and target topology. A value of 0.0 weights predominantly toward data, while a value of 1.0 places a strong emphasis on target. The default of 0.5 balances the weighting equally between data and target.
Random seed used for the stochastic aspects of the transform operation. This ensures consistency in transform operations.
Controls verbosity of logging.
Keyword arguments to be used by the tqdm progress bar.
Controls if the rows of your data should be uniqued before being embedded. If you have more duplicates than you have n_neighbors you can have identical data points lying in different regions of your space. Duplicates also violate the definition of a metric. To map from the internal structures back to your data use the variable _unique_inverse_.
Specifies whether the density-augmented objective of densMAP should be used for optimization. Turning on this option generates an embedding where the local densities are encouraged to be correlated with those in the original space. Parameters below with the prefix ‘dens’ further control the behavior of this extension.
Controls the regularization weight of the density correlation term in densMAP. Higher values prioritize density preservation over the UMAP objective, and vice versa for values closer to zero. Setting this parameter to zero is equivalent to running the original UMAP algorithm.
Controls the fraction of epochs (between 0 and 1) where the density-augmented objective is used in densMAP. The first (1 - dens_frac) fraction of epochs optimize the original UMAP objective before introducing the density correlation term.
A small constant added to the variance of local radii in the embedding when calculating the density correlation objective to prevent numerical instability from dividing by a small number.
Determines whether the local radii of the final embedding (an inverse measure of local density) are computed and returned in addition to the embedding. If set to True, local radii of the original data are also included in the output for comparison; the output is a tuple (embedding, original local radii, embedding local radii). This option can also be used when densmap=False to calculate the densities for UMAP embeddings.
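A sketch combining the densMAP options described above (toy data; the dens_lambda value is illustrative):

```python
import numpy as np
import umap

X = np.random.rand(300, 12)  # toy data

# density-augmented objective; output_dens adds local radii to the output
reducer = umap.UMAP(densmap=True, dens_lambda=2.0, output_dens=True)
embedding, rad_orig, rad_emb = reducer.fit_transform(X)
```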
Disconnect any vertices of distance greater than or equal to disconnection_distance when approximating the manifold via our k-nn graph. This is particularly useful in the case that you have a bounded metric. The UMAP assumption that we have a connected manifold can be problematic when you have points that are maximally different from all the rest of your data. The connected manifold assumption will make such points have perfect similarity to a random set of other points. Too many such points will artificially connect your space.
If the k-nearest neighbors of each point have already been calculated you can pass them in here to save computation time. The number of nearest neighbors in the precomputed_knn must be greater than or equal to the n_neighbors parameter. This should be a tuple containing the output of the nearest_neighbors() function or attributes from a previously fit UMAP object; (knn_indices, knn_dists, knn_search_index). If you wish to use k-nearest neighbors data calculated by another package then provide a tuple of the form (knn_indices, knn_dists). The contents of the tuple should be two numpy arrays of shape (N, n_neighbors) where N is the number of items in the input data. The first array should be the integer indices of the nearest neighbors, and the second array should be the corresponding distances. The nearest neighbor of each item should be itself, e.g. the nearest neighbor of item 0 should be 0, the nearest neighbor of item 1 should be 1, and so on. Please note that you will not be able to transform new data in this case.
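For example, a sketch of the two-tuple form using k-nearest neighbors computed by scikit-learn (note that kneighbors returns distances first and indices second, and that each training point is its own first neighbor):

```python
import numpy as np
import umap
from sklearn.neighbors import NearestNeighbors

X = np.random.rand(500, 20)  # toy data

# compute exact knn once; must cover at least n_neighbors neighbours
nn = NearestNeighbors(n_neighbors=30).fit(X)
knn_dists, knn_indices = nn.kneighbors(X)

reducer = umap.UMAP(n_neighbors=15, precomputed_knn=(knn_indices, knn_dists))
embedding = reducer.fit_transform(X)
```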
Fit X into an embedded space.
Optionally use y for supervised dimension reduction.
If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row. If the method is ‘exact’, X may be a sparse matrix of type ‘csr’, ‘csc’ or ‘coo’.
A target array for supervised dimension reduction. How this is handled is determined by parameters UMAP was instantiated with. The relevant attributes are target_metric and target_metric_kwds.
False: accepts np.inf, np.nan, pd.NA in array.
‘allow-nan’: accepts only np.nan and pd.NA values in array. Values cannot be infinite.
Any additional keyword arguments are passed to _fit_embed_data.
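A sketch of supervised and semi-supervised fitting (toy labels; -1 marks unlabelled points under the default ‘categorical’ target metric):

```python
import numpy as np
import umap

X = np.random.rand(200, 16)  # toy data
y = np.random.randint(0, 4, size=200)
y[:50] = -1  # unlabelled points for semi-supervised reduction

reducer = umap.UMAP(target_metric="categorical", target_weight=0.5)
embedding = reducer.fit_transform(X, y)
```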
Fit X into an embedded space and return that transformed output.
If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row.
A target array for supervised dimension reduction. How this is handled is determined by parameters UMAP was instantiated with. The relevant attributes are target_metric and target_metric_kwds.
False: accepts np.inf, np.nan, pd.NA in array.
‘allow-nan’: accepts only np.nan and pd.NA values in array. Values cannot be infinite.
Embedding of the training data in low-dimensional space.
If the output_dens flag is set, a tuple is returned that additionally includes:
Local radii of data points in the original data space (log-transformed).
Local radii of data points in the embedding (log-transformed).
Transform X in the existing embedded space back into the input data space and return that transformed output.
New points to be inverse transformed.
Generated data points in the input data space.
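A sketch (inverse transforms are computationally expensive, and the generated points are synthesized rather than recovered training samples):

```python
import numpy as np
import umap

X = np.random.rand(300, 8)  # toy data
reducer = umap.UMAP().fit(X)

# pick two locations in the 2-d embedding and map them back to data space
points = np.array([[0.0, 0.0], [1.0, 1.0]])
reconstructed = reducer.inverse_transform(points)
print(reconstructed.shape)  # (2, 8)
```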
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Metadata routing for the ensure_all_finite parameter in fit.
The updated object.
Request metadata passed to the transform method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to transform.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Metadata routing for the ensure_all_finite parameter in transform.
The updated object.
Transform X into the existing embedded space and return that transformed output.
New data to be transformed.
False: accepts np.inf, np.nan, pd.NA in array.
‘allow-nan’: accepts only np.nan and pd.NA values in array. Values cannot be infinite.
Embedding of the new data in low-dimensional space.
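A sketch of embedding new points with an already-trained model:

```python
import numpy as np
import umap

X_train = np.random.rand(400, 10)  # toy data
X_new = np.random.rand(50, 10)

reducer = umap.UMAP(n_neighbors=15).fit(X_train)
new_embedding = reducer.transform(X_new)
print(new_embedding.shape)  # (50, 2)
```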
A number of internal functions can also be accessed separately for more fine tuned work.
Useful Functions
Construct the membership strength data for the 1-skeleton of each local fuzzy simplicial set – this is formed as a sparse matrix where each row is a local fuzzy simplicial set, with a membership strength for the 1-simplex to each other data point.
The indices of the n_neighbors closest points in the dataset.
The distances to the n_neighbors closest points in the dataset.
The normalization factor derived from the metric tensor approximation.
The local connectivity adjustment.
Whether to return the pairwise distance associated with each edge.
Does the nearest neighbour set represent a bipartite graph? That is, are the nearest neighbour indices from the same point set as the row indices?
Row data for the resulting sparse matrix (coo format)
Column data for the resulting sparse matrix (coo format)
Entries for the resulting sparse matrix (coo format)
Distance associated with each entry in the resulting sparse matrix
Combine a fuzzy simplicial set with another fuzzy simplicial set generated from discrete metric data using discrete distances. The target data is assumed to be categorical label data (a vector of labels), and this will update the fuzzy simplicial set to respect that label data.
TODO: optional category cardinality based weighting of distance
The input fuzzy simplicial set.
The categorical labels to use in the intersection.
The distance an unknown label (-1) is assumed to be from any point.
The distance between unmatched labels.
If not None, then use this metric to determine the distance between values.
If using a custom metric, scale the distance values by this value – this controls the weighting of the intersection. Larger values weight more toward target.
The resulting intersected fuzzy simplicial set.
Under the assumption of categorical distance for the intersecting simplicial set perform a fast intersection.
An array of the row of each non-zero in the sparse matrix representation.
An array of the column of each non-zero in the sparse matrix representation.
An array of the value of each non-zero in the sparse matrix representation.
The categorical labels to use in the intersection.
The distance an unknown label (-1) is assumed to be from any point.
The distance between unmatched labels.
Under the assumption of categorical distance for the intersecting simplicial set perform a fast intersection.
An array of the row of each non-zero in the sparse matrix representation.
An array of the column of each non-zero in the sparse matrix representation.
An array of the values of each non-zero in the sparse matrix representation.
The vectors of categorical labels to use in the intersection.
The function used to calculate distance over the target array.
A scaling to apply to the metric.
Fit a, b params for the differentiable curve used in lower dimensional fuzzy simplicial complex construction. We want the smooth curve (from a pre-defined family with simple gradient) that best matches an offset exponential decay.
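A sketch of deriving a and b from spread and min_dist (the printed values are approximate):

```python
from umap.umap_ import find_ab_params

# fit a, b so that 1 / (1 + a * d^(2b)) approximates an exponential
# decay offset by min_dist at the given spread
a, b = find_ab_params(spread=1.0, min_dist=0.1)
print(a, b)  # roughly 1.58 and 0.90 for these defaults
```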
Given a set of data X, a neighborhood size, and a measure of distance compute the fuzzy simplicial set (here represented as a fuzzy graph in the form of a sparse matrix) associated to the data. This is done by locally approximating geodesic distance at each point, creating a fuzzy simplicial set for each such point, and then combining all the local fuzzy simplicial sets into a global one via a fuzzy union.
The data to be modelled as a fuzzy simplicial set.
The number of neighbors to use to approximate geodesic distance. Larger numbers induce more global estimates of the manifold that can miss finer detail, while smaller values will focus on fine manifold structure to the detriment of the larger picture.
A state capable of being used as a numpy random state.
The metric to use to compute distances in high dimensional space. If a string is passed it must match a valid predefined metric. If a general metric is required a function that takes two 1d arrays and returns a float can be provided. For performance purposes it is required that this be a numba jit’d function. Valid string metrics include:
euclidean (or l2)
manhattan (or l1)
cityblock
braycurtis
canberra
chebyshev
correlation
cosine
dice
hamming
jaccard
kulsinski
ll_dirichlet
mahalanobis
matching
minkowski
rogerstanimoto
russellrao
seuclidean
sokalmichener
sokalsneath
sqeuclidean
yule
wminkowski
Metrics that take arguments (such as minkowski, mahalanobis etc.) can have arguments passed via the metric_kwds dictionary. At this time care must be taken and dictionary elements must be ordered appropriately; this will hopefully be fixed in the future.
Arguments to pass on to the metric, such as the p value for Minkowski distance.
If the k-nearest neighbors of each point have already been calculated you can pass them in here to save computation time. This should be an array with the indices of the k-nearest neighbors as a row for each data point.
If the k-nearest neighbors of each point have already been calculated you can pass them in here to save computation time. This should be an array with the distances of the k-nearest neighbors as a row for each data point.
Whether to use angular/cosine distance for the random projection forest for seeding NN-descent to determine approximate nearest neighbors.
Interpolate between (fuzzy) union and intersection as the set operation used to combine local fuzzy simplicial sets to obtain a global fuzzy simplicial sets. Both fuzzy set operations use the product t-norm. The value of this parameter should be between 0.0 and 1.0; a value of 1.0 will use a pure fuzzy union, while 0.0 will use a pure fuzzy intersection.
The local connectivity required – i.e. the number of nearest neighbors that should be assumed to be connected at a local level. The higher this value the more connected the manifold becomes locally. In practice this should not be more than the local intrinsic dimension of the manifold.
Whether to report information on the current progress of the algorithm.
Whether to return the pairwise distance associated with each edge.
A fuzzy simplicial set represented as a sparse matrix. The (i, j) entry of the matrix represents the membership strength of the 1-simplex between the ith and jth sample points.
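A sketch of computing the fuzzy graph directly (the three-value return matches recent umap-learn versions, which also hand back the smoothed-knn sigma and rho arrays):

```python
import numpy as np
from umap.umap_ import fuzzy_simplicial_set

X = np.random.rand(100, 5)  # toy data

graph, sigmas, rhos = fuzzy_simplicial_set(
    X,
    n_neighbors=15,
    random_state=np.random.RandomState(42),
    metric="euclidean",
)
print(graph.shape)  # (100, 100) sparse membership-strength matrix
```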
Given a bipartite graph representing the 1-simplices and strengths between the new points and the original data set along with an embedding of the original points initialize the positions of new points relative to the strengths (of their neighbors in the source data).
If a point is in our original data set it embeds at the original point’s coordinates. If a point has no neighbours in our original dataset it embeds as the np.nan vector. Otherwise a point is the weighted average of its neighbours’ embedding locations.
A matrix indicating the 1-simplices and their associated strengths. These strengths should be values between zero and one and not normalized; a value of one indicates that the new point was identical to one of our original points.
The original embedding of the source data.
An initial embedding of the new sample points.
Given indices and weights and an original embedding, initialize the positions of new points relative to the indices and weights (of their neighbors in the source data).
The indices of the neighbors of each new sample
The membership strengths of associated 1-simplices for each of the new samples.
The original embedding of the source data.
An initial embedding of the new sample points.
Given a set of weights and number of epochs generate the number of epochs per sample for each weight.
The weights of how much we wish to sample each 1-simplex.
The total number of epochs we want to train for.
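A small worked example: the heaviest 1-simplex is sampled every epoch, and lighter ones proportionally less often:

```python
import numpy as np
from umap.umap_ import make_epochs_per_sample

weights = np.array([1.0, 0.5, 0.25])
# returns the number of epochs between samplings of each 1-simplex:
# here every 1, 2 and 4 epochs respectively
print(make_epochs_per_sample(weights, 100))  # [1. 2. 4.]
```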
Compute the n_neighbors nearest points for each data point in X under metric. This may be exact, but more likely is approximated via nearest neighbor descent.
The input data to compute the k-neighbor graph of.
The number of nearest neighbors to compute for each sample in X.
The metric to use for the computation.
Any arguments to pass to the metric computation function.
Whether to use angular rp trees in NN approximation.
The random state to use for approximate NN computations.
Whether to pursue lower memory NNdescent.
Whether to print status data during the computation.
The indices of the n_neighbors closest points in the dataset.
The distances to the n_neighbors closest points in the dataset.
The random projection forest used for searching (if used, None otherwise).
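A sketch of calling it directly (the keyword names shown match recent umap-learn versions):

```python
import numpy as np
from umap.umap_ import nearest_neighbors

X = np.random.rand(200, 8)  # toy data

knn_indices, knn_dists, knn_search_index = nearest_neighbors(
    X,
    n_neighbors=10,
    metric="euclidean",
    metric_kwds={},
    angular=False,
    random_state=np.random.RandomState(42),
)
print(knn_indices.shape, knn_dists.shape)  # (200, 10) (200, 10)
```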
A simple wrapper function to avoid large amounts of code repetition.
Reset the local connectivity requirement – each data sample should have complete confidence in at least one 1-simplex in the simplicial set. We can enforce this by locally rescaling confidences, and then remerging the different local simplicial sets together.
The simplicial set for which to recalculate with respect to local connectivity.
The recalculated simplicial set, now with the local connectivity assumption restored.
Perform a fuzzy simplicial set embedding, using a specified initialisation method and then minimizing the fuzzy set cross entropy between the 1-skeletons of the high and low dimensional fuzzy simplicial sets.
The source data to be embedded by UMAP.
The 1-skeleton of the high dimensional fuzzy simplicial set as represented by a graph for which we require a sparse matrix for the (weighted) adjacency matrix.
The dimensionality of the euclidean space into which to embed the data.
Initial learning rate for the SGD.
Parameter of differentiable approximation of right adjoint functor
Parameter of differentiable approximation of right adjoint functor
Weight to apply to negative samples.
The number of negative samples to select per positive sample in the optimization process. Increasing this value will result in greater repulsive force being applied, greater optimization cost, but slightly more accuracy.
The number of training epochs to be used in optimizing the low dimensional embedding. Larger values result in more accurate embeddings. If 0 is specified a value will be selected based on the size of the input dataset (200 for large datasets, 500 for small). If a list of int is specified, then the intermediate embeddings at the different epochs specified in that list are returned in aux_data["embedding_list"].
How to initialize the low dimensional embedding. Options are:
- ‘spectral’: use a spectral embedding of the fuzzy 1-skeleton.
- ‘random’: assign initial embedding positions at random.
- ‘pca’: use the first n_components from PCA applied to the input data.
A numpy array of initial embedding positions.
A state capable of being used as a numpy random state.
The metric used to measure distance in high dimensional space; used if multiple connected components need to be laid out.
Keyword arguments to be passed to the metric function; used if multiple connected components need to be laid out.
Whether to use the density-augmented objective function to optimize the embedding according to the densMAP algorithm.
Keyword arguments to be used by the densMAP optimization.
Whether to output local radii in the original data and the embedding.
Function returning the distance between two points in embedding space and the gradient of the distance wrt the first argument.
Keyword arguments to be passed to the output_metric function.
Whether to use the faster code specialised for euclidean output metrics.
Whether to run the computation using numba parallel. Running in parallel is non-deterministic, and is not used if a random seed has been set, to ensure reproducibility.
Whether to report information on the current progress of the algorithm.
Keyword arguments to be used by the tqdm progress bar.
The optimized embedding of graph into an n_components-dimensional euclidean space.
Auxiliary output returned with the embedding. When the densMAP extension is turned on, this dictionary includes local radii in the original data (rad_orig) and in the embedding (rad_emb).
Compute a continuous version of the distance to the kth nearest neighbor. That is, this is similar to knn-distance but allows continuous k values rather than requiring an integral k. In essence we are simply computing the distance such that the cardinality of fuzzy set we generate is k.
Distances to nearest neighbors for each sample. Each row should be a sorted list of distances to a given sample’s nearest neighbors.
The number of nearest neighbors to approximate for.
We need to binary search for the correct distance value. This is the max number of iterations to use in such a search.
The local connectivity required – i.e. the number of nearest neighbors that should be assumed to be connected at a local level. The higher this value the more connected the manifold becomes locally. In practice this should not be more than the local intrinsic dimension of the manifold.
The target bandwidth of the kernel, larger values will produce larger return values.
The distance to kth nearest neighbor, as suitably approximated.
The distance to the 1st nearest neighbor for each point.
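A sketch (k is continuous; the two return values correspond to the smooth knn-distances and the first-neighbor distances described above):

```python
import numpy as np
from umap.umap_ import smooth_knn_dist

# sorted distances from each of 10 samples to their 5 nearest neighbours
dists = np.sort(np.random.rand(10, 5), axis=1).astype(np.float32)

# sigmas: smooth distance to the (continuous) k-th neighbour per point
# rhos: distance to each point's first nearest neighbour
sigmas, rhos = smooth_knn_dist(dists, 5.0)
```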