Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer.
See glossary entry for cross-validation estimator.
The optimization objective for MultiTaskLasso is:
(1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
Where:
||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of the norms of each row.
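For illustration (a sketch, not part of the estimator's API), the ||W||_21 penalty can be evaluated directly in NumPy for any coefficient matrix W of shape (n_features, n_targets):
>>> import numpy as np
>>> W = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
>>> np.sqrt((W ** 2).sum(axis=1)).sum()  # sum of the row norms: 5 + 0 + 1
np.float64(6.0)
Rows whose norm is driven to zero correspond to features that are discarded jointly across all tasks.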
Read more in the User Guide.
Added in version 0.15.
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
Number of alphas along the regularization path.
Deprecated since version 1.7: n_alphas was deprecated in 1.7 and will be removed in 1.9. Use alphas instead.
Values of alphas to test along the regularization path. If int, alphas values are generated automatically. If array-like, list of alpha values to use.
Changed in version 1.7: alphas accepts an integer value which removes the need to pass n_alphas.
Deprecated since version 1.7: alphas=None was deprecated in 1.7 and will be removed in 1.9, at which point the default value will be set to 100.
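For instance (assuming scikit-learn 1.7 or later for the integer form), both calling conventions look like:
>>> import numpy as np
>>> from sklearn.linear_model import MultiTaskLassoCV
>>> reg_auto = MultiTaskLassoCV(alphas=100)  # 100 alphas generated automatically
>>> reg_grid = MultiTaskLassoCV(alphas=np.logspace(-4, 0, 30))  # explicit grid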
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
The maximum number of iterations.
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
If True, X will be copied; else, it may be overwritten.
Determines the cross-validation splitting strategy. Possible inputs for cv are:
None, to use the default 5-fold cross-validation,
int, to specify the number of folds.
An iterable yielding (train, test) splits as arrays of indices.
For int/None inputs, KFold is used.
Refer to the User Guide for the various cross-validation strategies that can be used here.
Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
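A short sketch of the accepted cv inputs:
>>> from sklearn.linear_model import MultiTaskLassoCV
>>> from sklearn.model_selection import KFold
>>> reg_default = MultiTaskLassoCV(cv=None)  # default 5-fold cross-validation
>>> reg_int = MultiTaskLassoCV(cv=3)         # 3 folds
>>> reg_custom = MultiTaskLassoCV(cv=KFold(n_splits=5, shuffle=True, random_state=0))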
Amount of verbosity.
Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == 'random'. Pass an int for reproducible output across multiple function calls. See Glossary.
If set to 'random', a random coefficient is updated at each iteration rather than looping over features sequentially. This often leads to significantly faster convergence, especially when tol is higher than 1e-4.
Independent term in decision function.
Parameter vector (W in the cost function formula). Note that coef_ stores the transpose of W, W.T.
The amount of penalization chosen by cross validation.
Mean square error for the test set on each fold, varying alpha.
The grid of alphas used for fitting.
Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha.
The dual gap at the end of the optimization for the optimal alpha.
Number of features seen during fit.
Added in version 0.24.
Names of features seen during fit, an ndarray of shape (n_features_in_,). Defined only when X has feature names that are all strings.
Added in version 1.0.
See also
MultiTaskElasticNet
Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer.
ElasticNetCV
Elastic net model with best model selection by cross-validation.
MultiTaskElasticNetCV
Multi-task L1/L2 ElasticNet with built-in cross-validation.
Notes
The algorithm used to fit the model is coordinate descent.
In fit, once the best parameter alpha is found through cross-validation, the model is fit again using the entire training set.
To avoid unnecessary memory duplication the X and y arguments of the fit method should be directly passed as Fortran-contiguous numpy arrays.
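A minimal sketch of this recommendation (the data is synthetic):
>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import MultiTaskLassoCV
>>> X, y = make_regression(n_targets=2, noise=4, random_state=0)
>>> X, y = np.asfortranarray(X), np.asfortranarray(y)  # column-major, avoids an internal copy
>>> reg = MultiTaskLassoCV(cv=5, random_state=0).fit(X, y)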
Examples
>>> from sklearn.linear_model import MultiTaskLassoCV
>>> from sklearn.datasets import make_regression
>>> from sklearn.metrics import r2_score
>>> X, y = make_regression(n_targets=2, noise=4, random_state=0)
>>> reg = MultiTaskLassoCV(cv=5, random_state=0).fit(X, y)
>>> r2_score(y, reg.predict(X))
0.9994
>>> reg.alpha_
np.float64(0.5713)
>>> reg.predict(X[:1,])
array([[153.7971, 94.9015]])
Fit the MultiTaskLasso model with coordinate descent.
The fit is performed on a grid of alphas, and the best alpha is estimated by cross-validation.
Data.
Target. Will be cast to X’s dtype if necessary.
Parameters to be passed to the CV splitter.
Added in version 1.4: Only available if enable_metadata_routing=True, which can be set by using sklearn.set_config(enable_metadata_routing=True). See the Metadata Routing User Guide for more details.
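As a hedged sketch of this routing mechanism (the GroupKFold splitter and group labels below are illustrative; group-based splitters request groups by default):
>>> import numpy as np
>>> from sklearn import set_config
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import MultiTaskLassoCV
>>> from sklearn.model_selection import GroupKFold
>>> set_config(enable_metadata_routing=True)
>>> X, y = make_regression(n_targets=2, random_state=0)  # 100 samples
>>> groups = np.repeat(np.arange(10), 10)                # illustrative group labels
>>> reg = MultiTaskLassoCV(cv=GroupKFold(n_splits=5)).fit(X, y, groups=groups)
>>> set_config(enable_metadata_routing=False)            # restore the default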
Returns an instance of the fitted model.
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Added in version 1.4.
A MetadataRouter encapsulating routing information.
Get parameters for this estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Compute Lasso path with coordinate descent.
The Lasso optimization function varies for mono and multi-outputs.
For mono-output tasks it is:
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
For multi-output tasks it is:
(1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
Where:
||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of the norms of each row.
Read more in the User Guide.
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
Target values.
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
Number of alphas along the regularization path.
List of alphas where to compute the models. If None, alphas are set automatically.
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', let us decide. The Gram matrix can also be passed as argument.
Xy = np.dot(X.T, y), which can be precomputed. It is useful only when the Gram matrix is precomputed.
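For example (a sketch with synthetic data), precomputing both quantities before calling lasso_path:
>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import lasso_path
>>> X, y = make_regression(n_features=20, random_state=0)
>>> gram = np.dot(X.T, X)  # precomputed Gram matrix
>>> Xy = np.dot(X.T, y)    # useful only together with a precomputed Gram
>>> alphas, coefs, gaps = lasso_path(X, y, precompute=gram, Xy=Xy)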
If True, X will be copied; else, it may be overwritten.
The initial values of the coefficients.
Amount of verbosity.
Whether to return the number of iterations or not.
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1.)
Keyword arguments passed to the coordinate descent solver.
The alphas along the path where models are computed.
Coefficients along the path.
The dual gaps at the end of the optimization for each alpha.
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha.
See also
lars_path
Compute Least Angle Regression or Lasso path using LARS algorithm.
Lasso
The Lasso is a linear model that estimates sparse coefficients.
LassoLars
Lasso model fit with Least Angle Regression a.k.a. Lars.
LassoCV
Lasso linear model with iterative fitting along a regularization path.
LassoLarsCV
Cross-validated Lasso using the LARS algorithm.
sklearn.decomposition.sparse_encode
Estimator that can be used to transform signals into sparse linear combination of atoms from a fixed dictionary.
Notes
For an example, see examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py.
To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array.
Note that in certain cases the Lars solver may be significantly faster at computing this path. In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path.
Examples
Comparing lasso_path and lars_path with interpolation:
>>> import numpy as np
>>> from sklearn.linear_model import lasso_path
>>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T
>>> y = np.array([1, 2, 3.1])
>>> # Use lasso_path to compute a coefficient path
>>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> print(coef_path)
[[0.         0.         0.46874778]
 [0.2159048  0.4425765  0.23689075]]
>>> # Now use lars_path and 1D linear interpolation to compute the
>>> # same path
>>> from sklearn.linear_model import lars_path
>>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso')
>>> from scipy import interpolate
>>> coef_path_continuous = interpolate.interp1d(alphas[::-1],
...                                             coef_path_lars[:, ::-1])
>>> print(coef_path_continuous([5., 1., .5]))
[[0.         0.         0.46915237]
 [0.2159048  0.4425765  0.23668876]]
Predict using the linear model.
Samples.
Returns predicted values.
Return the coefficient of determination on test data.
The coefficient of determination, \(R^2\), is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
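A worked check of this definition (the values are chosen for illustration):
>>> import numpy as np
>>> from sklearn.metrics import r2_score
>>> y_true = np.array([3.0, -0.5, 2.0, 7.0])
>>> y_pred = np.array([2.5, 0.0, 2.0, 8.0])
>>> u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares = 1.5
>>> v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares = 29.1875
>>> float(1 - u / v)
0.9486081370449679
>>> float(r2_score(y_true, y_pred))
0.9486081370449679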
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
True values for X.
Sample weights.
\(R^2\) of self.predict(X) w.r.t. y.
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Metadata routing for sample_weight parameter in fit.
The updated object.
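A pattern sketch of requesting sample_weight (whether fit actually consumes it depends on the estimator and version; routing must be enabled first):
>>> from sklearn import set_config
>>> from sklearn.linear_model import MultiTaskLassoCV
>>> set_config(enable_metadata_routing=True)
>>> reg = MultiTaskLassoCV().set_fit_request(sample_weight=True)
>>> # A meta-estimator such as cross_validate would now forward
>>> # params={"sample_weight": ...} to this estimator's fit.
>>> set_config(enable_metadata_routing=False)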
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Estimator parameters.
Estimator instance.
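For instance, updating a parameter directly and through a nested Pipeline (the pipeline is illustrative):
>>> from sklearn.linear_model import MultiTaskLassoCV
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> reg = MultiTaskLassoCV().set_params(max_iter=2000, tol=1e-5)
>>> pipe = Pipeline([("scale", StandardScaler()), ("model", MultiTaskLassoCV())])
>>> pipe = pipe.set_params(model__max_iter=2000)  # <component>__<parameter> form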
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Metadata routing for sample_weight parameter in score.
The updated object.