FastICA: a fast algorithm for Independent Component Analysis.
The implementation is based on [1].
Read more in the User Guide.
Number of components to use. If None is passed, all are used.
Specify which algorithm to use for FastICA.
Specify the whitening strategy to use.
If ‘arbitrary-variance’, whitening with arbitrary variance is used.
If ‘unit-variance’, the whitening matrix is rescaled to ensure that each recovered source has unit variance.
If False, the data is already considered to be whitened, and no whitening is performed.
Changed in version 1.3: The default value of whiten changed to ‘unit-variance’.
The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function, and of its derivative, at the point. The derivative should be averaged along its last dimension. Example:
def my_g(x):
    return x ** 3, (3 * x ** 2).mean(axis=-1)
Arguments to send to the functional form. If empty or None and if fun='logcosh', fun_args will take value {'alpha': 1.0}. (A usage sketch appears after the parameter descriptions below.)
Maximum number of iterations during fit.
A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.
Initial un-mixing array. If w_init=None, then an array of values drawn from a normal distribution is used.
The solver to use for whitening.
“svd” is more stable numerically if the problem is degenerate, and often faster when n_samples <= n_features.
“eigh” is generally more memory efficient when n_samples >= n_features, and can be faster when n_samples >= 50 * n_features.
Added in version 1.2.
Used to initialize w_init when not specified, with a normal distribution. Pass an int for reproducible results across multiple function calls. See Glossary.
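As an illustration of the fun, whiten, and whiten_solver parameters above, here is a minimal sketch on synthetic data. The source signals, mixing matrix, and variable names (my_g, S_true, A) are illustrative choices, not part of the estimator's API:

>>> import numpy as np
>>> from sklearn.decomposition import FastICA
>>> def my_g(x):  # cube nonlinearity: value and derivative averaged over the last axis
...     return x ** 3, (3 * x ** 2).mean(axis=-1)
>>> rng = np.random.RandomState(0)
>>> S_true = rng.laplace(size=(2, 500))     # two non-Gaussian source signals
>>> A = np.array([[1.0, 0.5], [0.5, 1.0]])  # mixing matrix
>>> X = (A @ S_true).T                      # observed mixtures, shape (500, 2)
>>> ica = FastICA(n_components=2, fun=my_g, whiten='unit-variance',
...               whiten_solver='eigh', random_state=0)
>>> S_est = ica.fit_transform(X)            # recovered (unmixed) sources
>>> S_est.shape
(500, 2)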
The linear operator to apply to the data to get the independent sources. This is equal to the unmixing matrix when whiten is False, and equal to np.dot(unmixing_matrix, self.whitening_) when whiten is True.
The pseudo-inverse of components_. It is the linear operator that maps independent sources to the data. (See the shape sketch after the attribute descriptions below.)
The mean over features. Only set if self.whiten is True.
Number of features seen during fit.
Added in version 0.24.
ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
Added in version 1.0.
If the algorithm is "deflation", n_iter is the maximum number of iterations run across all components. Otherwise it is the number of iterations taken to converge.
Only set if whiten is True. This is the pre-whitening matrix that projects data onto the first n_components principal components.
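As a shape sketch for the components_, mixing_, and whitening_ attributes described above (reusing the digits setup from the Examples section below; variable names are illustrative):

>>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import FastICA
>>> X, _ = load_digits(return_X_y=True)     # shape (1797, 64)
>>> ica = FastICA(n_components=7, random_state=0,
...               whiten='unit-variance').fit(X)
>>> ica.components_.shape   # (n_components, n_features): data -> sources
(7, 64)
>>> ica.mixing_.shape       # (n_features, n_components): sources -> data
(64, 7)
>>> ica.whitening_.shape    # (n_components, n_features): pre-whitening projection
(7, 64)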
See also
PCA
Principal component analysis (PCA).
IncrementalPCA
Incremental principal components analysis (IPCA).
KernelPCA
Kernel Principal component analysis (KPCA).
MiniBatchSparsePCA
Mini-batch Sparse Principal Components Analysis.
SparsePCA
Sparse Principal Components Analysis (SparsePCA).
References
[1] A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430.
Examples
>>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import FastICA
>>> X, _ = load_digits(return_X_y=True)
>>> transformer = FastICA(n_components=7,
...                       random_state=0,
...                       whiten='unit-variance')
>>> X_transformed = transformer.fit_transform(X)
>>> X_transformed.shape
(1797, 7)
Fit the model to X.
Training data, where n_samples is the number of samples and n_features is the number of features.
Not used, present for API consistency by convention.
Returns the instance itself.
Fit the model and recover the sources from X.
Training data, where n_samples is the number of samples and n_features is the number of features.
Not used, present for API consistency by convention.
Estimated sources obtained by transforming the data with the estimated unmixing matrix.
Get output feature names for transformation.
The feature names out will be prefixed by the lowercased class name. For example, if the transformer outputs 3 features, then the feature names out are: ["class_name0", "class_name1", "class_name2"].
Only used to validate feature names with the names seen in fit.
Transformed feature names.
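For FastICA the lowercased class name is "fastica", so a fit with three components would be expected to yield names like the following (a small sketch; the digits data is an arbitrary example):

>>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import FastICA
>>> X, _ = load_digits(return_X_y=True)
>>> ica = FastICA(n_components=3, random_state=0,
...               whiten='unit-variance').fit(X)
>>> list(ica.get_feature_names_out())
['fastica0', 'fastica1', 'fastica2']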
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
A MetadataRequest encapsulating routing information.
Get parameters for this estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
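A short sketch of reading back constructor parameters:

>>> from sklearn.decomposition import FastICA
>>> params = FastICA(n_components=5, max_iter=300).get_params(deep=True)
>>> params['n_components'], params['max_iter']
(5, 300)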
Transform the sources back to the mixed data (apply mixing matrix).
Sources, where n_samples is the number of samples and n_components is the number of components.
If False, data passed to fit are overwritten. Defaults to True.
Reconstructed data obtained with the mixing matrix.
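A minimal round-trip sketch (the digits data and parameter values mirror the Examples section and are illustrative only):

>>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import FastICA
>>> X, _ = load_digits(return_X_y=True)
>>> transformer = FastICA(n_components=7, random_state=0,
...                       whiten='unit-variance')
>>> X_transformed = transformer.fit_transform(X)               # sources, shape (1797, 7)
>>> X_restored = transformer.inverse_transform(X_transformed)  # back to feature space
>>> X_restored.shape
(1797, 64)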
Configure whether metadata should be requested to be passed to the inverse_transform method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to inverse_transform if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to inverse_transform.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Metadata routing for copy parameter in inverse_transform.
The updated object.
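A minimal sketch of setting the request, assuming metadata routing has been enabled globally first (it is disabled by default):

>>> import sklearn
>>> from sklearn.decomposition import FastICA
>>> sklearn.set_config(enable_metadata_routing=True)
>>> ica = FastICA(n_components=2).set_inverse_transform_request(copy=False)
>>> sklearn.set_config(enable_metadata_routing=False)  # restore the default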
Set output container.
See Introducing the set_output API for an example on how to use the API.
Configure output of transform and fit_transform.
"default"
: Default output format of a transformer
"pandas"
: DataFrame output
"polars"
: Polars output
None
: Transform configuration is unchanged
Added in version 1.4: "polars" option was added.
Estimator instance.
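For example, to get a pandas DataFrame back from transform and fit_transform (a sketch assuming pandas is installed; column names come from get_feature_names_out):

>>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import FastICA
>>> X, _ = load_digits(return_X_y=True)
>>> ica = FastICA(n_components=2, random_state=0, whiten='unit-variance')
>>> _ = ica.set_output(transform="pandas")
>>> df = ica.fit_transform(X)   # pandas.DataFrame of shape (1797, 2)
>>> list(df.columns)
['fastica0', 'fastica1']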
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Estimator parameters.
Estimator instance.
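A short sketch:

>>> from sklearn.decomposition import FastICA
>>> ica = FastICA().set_params(n_components=3, max_iter=500)
>>> ica.n_components, ica.max_iter
(3, 500)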
Configure whether metadata should be requested to be passed to the transform method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to transform.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Metadata routing for copy parameter in transform.
The updated object.
Recover the sources from X (apply the unmixing matrix).
Data to transform, where n_samples is the number of samples and n_features is the number of features.
If False, data passed to fit can be overwritten. Defaults to True.
Estimated sources obtained by transforming the data with the estimated unmixing matrix.
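A minimal sketch of applying a fitted transformer (reusing the digits setup from the Examples section; here the training data itself is transformed for illustration):

>>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import FastICA
>>> X, _ = load_digits(return_X_y=True)
>>> ica = FastICA(n_components=7, random_state=0,
...               whiten='unit-variance').fit(X)
>>> ica.transform(X).shape   # estimated sources
(1797, 7)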