Cross-validated Lasso, using the LARS algorithm.
See glossary entry for cross-validation estimator.
The optimization objective for Lasso is:
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Read more in the User Guide.
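As a concrete illustration of this objective, the following sketch evaluates it for a given coefficient vector; lasso_objective is a hypothetical helper written for this illustration, not a scikit-learn function.

>>> import numpy as np
>>> def lasso_objective(X, y, w, alpha):
...     # (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
...     residual = y - X @ w
...     return residual @ residual / (2 * X.shape[0]) + alpha * np.abs(w).sum()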
Parameters

fit_intercept
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
verbose
Sets the verbosity amount.
max_iter
Maximum number of iterations to perform.
precompute
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', the choice is made automatically. The Gram matrix cannot be passed as an argument since only subsets of X are used.
cv
Determines the cross-validation splitting strategy. Possible inputs for cv are:
None, to use the default 5-fold cross-validation,
an integer, to specify the number of folds,
an iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, KFold is used (see the sketch below). Refer to the User Guide for the various cross-validation strategies that can be used here.
Changed in version 0.22: the default value of cv changed from 3-fold to 5-fold when None.
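For illustration, a minimal sketch of the three accepted kinds of cv values; the fold counts and splitter settings are arbitrary.

>>> from sklearn.linear_model import LassoLarsCV
>>> from sklearn.model_selection import KFold
>>> reg_default = LassoLarsCV()    # cv=None: default 5-fold cross-validation
>>> reg_int = LassoLarsCV(cv=10)   # integer: number of folds
>>> reg_split = LassoLarsCV(cv=KFold(n_splits=5, shuffle=True, random_state=0))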
max_n_alphas
The maximum number of points on the path used to compute the residuals in the cross-validation.
n_jobs
Number of CPUs to use during the cross-validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See the Glossary for more details.
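A minimal sketch of both ways to control the parallelism; the backend name and worker counts are arbitrary choices for this illustration.

>>> from joblib import parallel_backend
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import LassoLarsCV
>>> X, y = make_regression(noise=4.0, random_state=0)
>>> reg = LassoLarsCV(n_jobs=-1).fit(X, y)    # use all processors for the CV folds
>>> with parallel_backend("loky", n_jobs=2):  # n_jobs=None defers to this context
...     reg = LassoLarsCV(n_jobs=None).fit(X, y)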
eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_X
If True, X will be copied; else, it may be overwritten.
positive
Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept, which is set to True by default. Under the positive restriction the model coefficients do not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator. As a consequence, using LassoLarsCV only makes sense for problems where a sparse solution is expected and/or reached.
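An illustrative sketch of the positive restriction, following the note above about fit_intercept; the dataset is arbitrary.

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import LassoLarsCV
>>> X, y = make_regression(noise=4.0, random_state=0)
>>> reg = LassoLarsCV(cv=5, positive=True, fit_intercept=False).fit(X, y)
>>> assert (reg.coef_ >= 0).all()  # every coefficient is non-negative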
Attributes

coef_
Parameter vector (w in the formulation formula).
intercept_
Independent term in the decision function.
coef_path_
The varying values of the coefficients along the path.
alpha_
The estimated regularization parameter alpha.
alphas_
The different values of alpha along the path.
cv_alphas_
All the values of alpha along the path for the different folds.
mse_path_
The mean squared error on the left-out data for each fold along the path (alpha values given by cv_alphas_).
n_iter_
The number of iterations run by Lars with the optimal alpha.
active_
Indices of active variables at the end of the path.
n_features_in_
Number of features seen during fit.
Added in version 0.24.

feature_names_in_
Names of features seen during fit; an ndarray of shape (n_features_in_,). Defined only when X has feature names that are all strings.
Added in version 1.0.
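A sketch of inspecting the main fitted attributes listed above; the dataset is arbitrary.

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import LassoLarsCV
>>> X, y = make_regression(noise=4.0, random_state=0)
>>> reg = LassoLarsCV(cv=5).fit(X, y)
>>> alpha = reg.alpha_               # regularization strength chosen by cross-validation
>>> coefs = reg.coef_                # coefficient vector, one entry per feature
>>> cv_errors = reg.mse_path_        # left-out mean squared error per fold along the path
>>> n_features = reg.n_features_in_  # number of features seen during fit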
See also
lars_path
Compute Least Angle Regression or Lasso path using LARS algorithm.
lasso_path
Compute Lasso path with coordinate descent.
Lasso
Linear Model trained with L1 prior as regularizer (aka the Lasso).
LassoCV
Lasso linear model with iterative fitting along a regularization path.
LassoLars
Lasso model fit with Least Angle Regression a.k.a. Lars.
LassoLarsIC
Lasso model fit with Lars using BIC or AIC for model selection.
sklearn.decomposition.sparse_encode
Sparse coding.
Notes
The object solves the same problem as the LassoCV object. However, unlike LassoCV, it finds the relevant alpha values by itself. In general, because of this property, it will be more stable. However, it is more fragile to heavily multicollinear datasets.
It is more efficient than LassoCV if only a small number of features are selected compared to the total number, for instance if there are very few samples compared to the number of features.
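For illustration, a sketch of the regime described above, with far fewer samples than features and a sparse underlying signal; the dataset parameters are arbitrary.

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import LassoCV, LassoLarsCV
>>> X, y = make_regression(n_samples=40, n_features=500, n_informative=5,
...                        noise=4.0, random_state=0)
>>> lars_cv = LassoLarsCV(cv=5).fit(X, y)  # candidate alphas come from the LARS path
>>> lasso_cv = LassoCV(cv=5).fit(X, y)     # coordinate descent over a preset alpha grid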
In fit, once the best parameter alpha is found through cross-validation, the model is fit again using the entire training set.
Examples
>>> from sklearn.linear_model import LassoLarsCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(noise=4.0, random_state=0)
>>> reg = LassoLarsCV(cv=5).fit(X, y)
>>> reg.score(X, y)
0.9993
>>> reg.alpha_
np.float64(0.3972)
>>> reg.predict(X[:1,])
array([-78.4831])
Fit the model using X, y as training data.
Training data.
Target values.
Parameters to be passed to the CV splitter.
Added in version 1.4: Only available if enable_metadata_routing=True, which can be set by using sklearn.set_config(enable_metadata_routing=True). See the Metadata Routing User Guide for more details.
Returns an instance of self.
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Added in version 1.4.
A MetadataRouter encapsulating routing information.
Get parameters for this estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Predict using the linear model.
Samples.
Returns predicted values.
Return the coefficient of determination on test data.
The coefficient of determination, \(R^2\), is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
True values for X.
Sample weights.
\(R^2\) of self.predict(X) w.r.t. y.
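A sketch of the same computation done by hand, to show what score returns; the fitted model is arbitrary.

>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import LassoLarsCV
>>> X, y = make_regression(noise=4.0, random_state=0)
>>> reg = LassoLarsCV(cv=5).fit(X, y)
>>> u = ((y - reg.predict(X)) ** 2).sum()  # residual sum of squares
>>> v = ((y - y.mean()) ** 2).sum()        # total sum of squares
>>> assert np.isclose(1 - u / v, reg.score(X, y))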
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
Configure whether metadata should be requested to be passed to the fit method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

False: metadata is not requested and the meta-estimator will not pass it to fit.

None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.
Metadata routing for the Xy parameter in fit.
The updated object.
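A minimal sketch of the calling pattern, assuming metadata routing has been enabled globally; on its own the call only records the request, and it has an effect only when the estimator is wrapped in a meta-estimator.

>>> from sklearn import set_config
>>> from sklearn.linear_model import LassoLarsCV
>>> set_config(enable_metadata_routing=True)
>>> reg = LassoLarsCV().set_fit_request(Xy=False)  # never route a provided Xy to fit
>>> set_config(enable_metadata_routing=False)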
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Estimator parameters.
Estimator instance.
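A sketch of both the flat and the nested form; the pipeline step names are arbitrary.

>>> from sklearn.linear_model import LassoLarsCV
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> reg = LassoLarsCV().set_params(cv=10, max_n_alphas=500)
>>> pipe = Pipeline([("scale", StandardScaler()), ("model", LassoLarsCV())])
>>> pipe = pipe.set_params(model__cv=3)  # <component>__<parameter> form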
Configure whether metadata should be requested to be passed to the score method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

False: metadata is not requested and the meta-estimator will not pass it to score.

None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.
Metadata routing for the sample_weight parameter in score.
The updated object.
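A minimal sketch of the calling pattern, assuming metadata routing has been enabled globally; the request only takes effect when a meta-estimator later forwards a sample_weight to score.

>>> from sklearn import set_config
>>> from sklearn.linear_model import LassoLarsCV
>>> set_config(enable_metadata_routing=True)
>>> reg = LassoLarsCV(cv=5).set_score_request(sample_weight=True)
>>> set_config(enable_metadata_routing=False)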