Non-Negative Matrix Factorization (NMF).
Find two non-negative matrices (W, H), i.e. matrices in which all elements are non-negative, whose product approximates the non-negative matrix X. This factorization can be used, for example, for dimensionality reduction, source separation, or topic extraction.
The objective function is:
\[\begin{aligned}
L(W, H) &= 0.5 \, ||X - WH||_{loss}^2 \\
&\quad + \alpha_W \cdot l1\_ratio \cdot n\_features \cdot ||vec(W)||_1 \\
&\quad + \alpha_H \cdot l1\_ratio \cdot n\_samples \cdot ||vec(H)||_1 \\
&\quad + 0.5 \cdot \alpha_W \cdot (1 - l1\_ratio) \cdot n\_features \cdot ||W||_{Fro}^2 \\
&\quad + 0.5 \cdot \alpha_H \cdot (1 - l1\_ratio) \cdot n\_samples \cdot ||H||_{Fro}^2
\end{aligned}\]
where \(||A||_{Fro}^2 = \sum_{i,j} A_{ij}^2\) (Frobenius norm) and \(||vec(A)||_1 = \sum_{i,j} abs(A_{ij})\) (Elementwise L1 norm).
The generic norm \(||X - WH||_{loss}\) may represent the Frobenius norm or another supported beta-divergence loss. The choice between options is controlled by the beta_loss parameter.

The regularization terms are scaled by n_features for W and by n_samples for H, to keep their impact balanced with respect to one another and to the data fit term, and to keep them as independent as possible of the size n_samples of the training set.
The objective function is minimized with an alternating minimization of W and H.
Note that the transformed data is named W and the components matrix is named H. In the NMF literature, the naming convention is usually the opposite since the data matrix X is transposed.
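To make the objective concrete, here is a minimal NumPy sketch (ours, not part of scikit-learn) that evaluates L(W, H) for the Frobenius loss, following the scaling conventions above:

>>> import numpy as np
>>> def nmf_objective(X, W, H, alpha_W=0.0, alpha_H=0.0, l1_ratio=0.0):
...     n_samples, n_features = X.shape
...     data_fit = 0.5 * ((X - W @ H) ** 2).sum()   # 0.5 * ||X - WH||_Fro^2
...     l1 = (alpha_W * l1_ratio * n_features * np.abs(W).sum()
...           + alpha_H * l1_ratio * n_samples * np.abs(H).sum())
...     l2 = 0.5 * ((1 - l1_ratio) * alpha_W * n_features * (W ** 2).sum()
...                 + (1 - l1_ratio) * alpha_H * n_samples * (H ** 2).sum())
...     return data_fit + l1 + l2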
Read more in the User Guide.
n_components : Number of components. If None, all features are kept. If n_components='auto', the number of components is automatically inferred from the shapes of W or H.
Changed in version 1.4: Added the 'auto' value.
Changed in version 1.6: Default value changed from None to 'auto'.
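As an illustration of the 'auto' value (a sketch with made-up data, assuming scikit-learn >= 1.4): combined with init='custom', the number of components is inferred from the shapes of the provided factors.

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> rng = np.random.RandomState(0)
>>> X = np.abs(rng.randn(6, 4))        # non-negative data, shape (6, 4)
>>> W_init = np.abs(rng.randn(6, 3))   # 3 components, chosen via the shapes
>>> H_init = np.abs(rng.randn(3, 4))
>>> model = NMF(n_components='auto', init='custom', max_iter=500)
>>> W = model.fit_transform(X, W=W_init, H=H_init)
>>> k = model.n_components_            # 3, inferred from W_init / H_init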
init : Method used to initialize the procedure. Valid options (a short comparison sketch follows this list):
None: 'nndsvda' if n_components <= min(n_samples, n_features), otherwise random.
'random': non-negative random matrices, scaled with sqrt(X.mean() / n_components).
'nndsvd': Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness).
'nndsvda': NNDSVD with zeros filled with the average of X (better when sparsity is not desired).
'nndsvdar': NNDSVD with zeros filled with small random values (generally a faster, less accurate alternative to NNDSVDa, for when sparsity is not desired).
'custom': Use custom matrices W and H, which must both be provided.
Changed in version 1.1: When init=None and n_components is less than n_samples and n_features, it defaults to 'nndsvda' instead of 'nndsvd'.
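As referenced above, a quick comparison sketch of the non-custom initializations on made-up non-negative data; reconstruction_err_ reports the final fit quality:

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> X = np.abs(np.random.RandomState(0).randn(20, 10))
>>> for init in [None, 'random', 'nndsvd', 'nndsvda', 'nndsvdar']:
...     model = NMF(n_components=4, init=init, random_state=0, max_iter=1000)
...     _ = model.fit(X)
...     print(init, round(model.reconstruction_err_, 4))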
solver : Numerical solver to use:
‘cd’ is a Coordinate Descent solver.
‘mu’ is a Multiplicative Update solver.
Added in version 0.17: Coordinate Descent solver.
Added in version 0.19: Multiplicative Update solver.
beta_loss : Beta divergence to be minimized, measuring the distance between X and the dot product WH. Note that values different from 'frobenius' (or 2) and 'kullback-leibler' (or 1) lead to significantly slower fits. Note that for beta_loss <= 0 (or 'itakura-saito'), the input matrix X cannot contain zeros. Used only in the 'mu' solver.
Added in version 0.19.
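For example (a sketch with made-up non-negative data), minimizing the Kullback-Leibler divergence requires the 'mu' solver; 'nndsvda' is a sensible companion initialization, because exact zeros introduced by plain 'nndsvd' are never updated by multiplicative updates:

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> X = np.abs(np.random.RandomState(0).randn(10, 6))
>>> model = NMF(n_components=3, solver='mu', beta_loss='kullback-leibler',
...             init='nndsvda', random_state=0, max_iter=1000)
>>> W = model.fit_transform(X)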
tol : Tolerance of the stopping condition.
max_iter : Maximum number of iterations before timing out.
random_state : Used for initialisation (when init == 'nndsvdar' or 'random'), and in Coordinate Descent. Pass an int for reproducible results across multiple function calls. See Glossary.
alpha_W : Constant that multiplies the regularization terms of W. Set it to zero (default) to have no regularization on W.
Added in version 1.0.
alpha_H : Constant that multiplies the regularization terms of H. Set it to zero to have no regularization on H. If 'same' (default), it takes the same value as alpha_W.
Added in version 1.0.
l1_ratio : The regularization mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an elementwise L2 penalty (aka Frobenius norm). For l1_ratio = 1 it is an elementwise L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
Added in version 0.17: Regularization parameter l1_ratio used in the Coordinate Descent solver.
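A brief sketch of the regularization knobs together (values picked arbitrarily for illustration): a non-zero alpha_W with an intermediate l1_ratio mixes L1 and L2 penalties, and the L1 part tends to drive entries of W and H to exactly zero:

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> X = np.abs(np.random.RandomState(0).randn(20, 10))
>>> model = NMF(n_components=4, init='random', random_state=0, max_iter=1000,
...             alpha_W=0.05, alpha_H='same', l1_ratio=0.7)
>>> W = model.fit_transform(X)
>>> sparsity = (W == 0).mean()   # fraction of exact zeros induced by the L1 term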
verbose : Whether to be verbose.
shuffle : If true, randomize the order of coordinates in the CD solver.
Added in version 0.17: shuffle parameter used in the Coordinate Descent solver.
components_ : Factorization matrix, sometimes called 'dictionary'.
n_components_ : The number of components. It is the same as the n_components parameter if it was given. Otherwise, it will be the same as the number of features.
reconstruction_err_ : Frobenius norm of the matrix difference, or beta-divergence, between the training data X and the reconstructed data WH from the fitted model.
n_iter_ : Actual number of iterations.
n_features_in_ : Number of features seen during fit.
Added in version 0.24.
feature_names_in_ : ndarray of shape (n_features_in_,). Names of features seen during fit. Defined only when X has feature names that are all strings.
Added in version 1.0.
See also
DictionaryLearning
Find a dictionary that sparsely encodes data.
MiniBatchSparsePCA
Mini-batch Sparse Principal Components Analysis.
PCA
Principal component analysis.
SparseCoder
Find a sparse representation of data from a fixed, precomputed dictionary.
SparsePCA
Sparse Principal Components Analysis.
TruncatedSVD
Dimensionality reduction using truncated SVD.
Examples
>>> import numpy as np
>>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
>>> from sklearn.decomposition import NMF
>>> model = NMF(n_components=2, init='random', random_state=0)
>>> W = model.fit_transform(X)
>>> H = model.components_
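Continuing the example, the two factors multiply back into an approximation of X, and the final fit quality is stored on the estimator:

>>> X_approx = W @ H                  # reconstruction of X from the factors
>>> err = model.reconstruction_err_   # Frobenius norm of X - WH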
Learn an NMF model for the data X.
X : Training vector, where n_samples is the number of samples and n_features is the number of features.
y : Not used, present for API consistency by convention.
**params : Parameters (keyword arguments) and values passed on to the fit_transform method.
Returns the instance itself.
Learn an NMF model for the data X and return the transformed data.
This is more efficient than calling fit followed by transform.
X : Training vector, where n_samples is the number of samples and n_features is the number of features.
y : Not used, present for API consistency by convention.
W : If init='custom', it is used as an initial guess for the solution. If None, uses the initialisation method specified in init.
H : If init='custom', it is used as an initial guess for the solution. If None, uses the initialisation method specified in init.
Transformed data.
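A minimal sketch of the efficiency claim above (made-up data): fit_transform returns W from the single optimisation run, whereas fit followed by transform solves for W a second time with the learned components held fixed, so the results are close but need not match exactly:

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> X = np.abs(np.random.RandomState(0).randn(10, 6))
>>> model = NMF(n_components=3, init='random', random_state=0, max_iter=500)
>>> W_fast = model.fit_transform(X)        # one pass
>>> W_slow = model.fit(X).transform(X)     # refit, then solve for W again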
Get output feature names for transformation.
The feature names out will be prefixed by the lowercased class name. For example, if the transformer outputs 3 features, then the feature names out are: ["class_name0", "class_name1", "class_name2"].
input_features : Only used to validate feature names with the names seen in fit.
Transformed feature names.
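For NMF the lowercased class name is 'nmf', so a fitted 3-component model reports names accordingly (a quick sketch with made-up data):

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> X = np.abs(np.random.RandomState(0).randn(6, 4))
>>> model = NMF(n_components=3, init='random', random_state=0, max_iter=500).fit(X)
>>> names = model.get_feature_names_out()   # array(['nmf0', 'nmf1', 'nmf2'], ...)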
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
A MetadataRequest encapsulating routing information.
Get parameters for this estimator.
deep : If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Transform data back to its original space.
Added in version 0.18.
Transformed data matrix.
Returns a data matrix of the original shape.
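A short sketch (made-up data): for NMF, mapping W back to the original space amounts to multiplying by components_, so inverse_transform(W) equals W @ components_:

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> X = np.abs(np.random.RandomState(0).randn(6, 4))
>>> model = NMF(n_components=2, init='random', random_state=0, max_iter=500)
>>> W = model.fit_transform(X)
>>> X_back = model.inverse_transform(W)    # same shape as X
>>> np.allclose(X_back, W @ model.components_)
True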
Set output container.
See Introducing the set_output API for an example on how to use the API.
Configure output of transform and fit_transform.
"default": Default output format of a transformer
"pandas": DataFrame output
"polars": Polars output
None: Transform configuration is unchanged
Added in version 1.4: "polars" option was added.
Estimator instance.
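For instance (assuming pandas is installed), requesting DataFrame output makes fit_transform return a labelled frame whose columns come from get_feature_names_out:

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> X = np.abs(np.random.RandomState(0).randn(6, 4))
>>> model = NMF(n_components=2, init='random', random_state=0, max_iter=500)
>>> _ = model.set_output(transform='pandas')   # returns the estimator itself
>>> W_df = model.fit_transform(X)              # pandas DataFrame, columns nmf0, nmf1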
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
**params : Estimator parameters.
Estimator instance.
Transform the data X according to the fitted NMF model.
X : Training vector, where n_samples is the number of samples and n_features is the number of features.
Transformed data.
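A closing sketch (made-up data): once fitted, the model projects new non-negative samples with the same number of features onto the learned components:

>>> import numpy as np
>>> from sklearn.decomposition import NMF
>>> rng = np.random.RandomState(0)
>>> X_train = np.abs(rng.randn(10, 6))
>>> model = NMF(n_components=3, init='random', random_state=0, max_iter=500).fit(X_train)
>>> X_new = np.abs(rng.randn(2, 6))    # 2 new samples, same 6 features
>>> W_new = model.transform(X_new)     # shape (2, 3)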