lars_path in the sufficient stats mode: the path is computed from the precomputed sufficient statistics Xy = X.T @ y and Gram = X.T @ X rather than from X and y directly.
The optimization objective for the case method='lasso' is:

    (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1

In the case of method='lar', the objective function is only known in the form of an implicit equation (see discussion in [1]).
Read more in the User Guide.
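Concretely, the method='lasso' objective can be evaluated with a minimal sketch like the one below (lasso_objective is an illustrative helper, not part of scikit-learn):

import numpy as np

def lasso_objective(X, y, w, alpha):
    # (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
    n_samples = X.shape[0]
    residual = y - X @ w
    return residual @ residual / (2 * n_samples) + alpha * np.sum(np.abs(w))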
Parameters

Xy : ndarray of shape (n_features,) or (n_features, n_targets)
    Xy = X.T @ y (see the first sketch after this parameter list).

Gram : ndarray of shape (n_features, n_features)
    Gram = X.T @ X.
n_samples : int
    Equivalent size of sample.
max_iter : int, default=500
    Maximum number of iterations to perform, set to infinity for no limit.
alpha_min : float, default=0
    Minimum correlation along the path. It corresponds to the regularization parameter alpha in the Lasso.
method : {'lar', 'lasso'}, default='lar'
    Specifies the returned model. Select 'lar' for Least Angle Regression, 'lasso' for the Lasso.
copy_X : bool, default=True
    If False, X is overwritten.
eps : float, default=np.finfo(float).eps
    The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Gram : bool, default=True
    If False, Gram is overwritten.
verbose : int, default=0
    Controls output verbosity.
return_path : bool, default=True
    If True, returns the entire path; otherwise returns only the last point of the path (see the second sketch after this parameter list).
return_n_iter : bool, default=False
    Whether to return the number of iterations.
positive : bool, default=False
    Restrict coefficients to be >= 0. This option is only allowed with method 'lasso'. Note that the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent lasso_path function (a cross-check appears after the Examples section).
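As a first sketch of the parameters above (assuming NumPy and scikit-learn; the problem is synthetic), the sufficient statistics are formed once from the data, and the resulting path is expected to match lars_path run on X and y directly, up to floating-point error:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path, lars_path_gram

X, y = make_regression(n_samples=200, n_features=10, random_state=0)

# Sufficient statistics: the path depends on the data only through these.
Xy = X.T @ y
Gram = X.T @ X

alphas_ref, _, coefs_ref = lars_path(X, y, method="lasso")
alphas, _, coefs = lars_path_gram(Xy, Gram, n_samples=X.shape[0], method="lasso")

assert np.allclose(alphas, alphas_ref)
assert np.allclose(coefs, coefs_ref)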
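A second sketch illustrates alpha_min, return_path, and positive on the same kind of synthetic problem (again a sketch, not canonical usage):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path_gram

X, y = make_regression(n_samples=100, n_features=5, n_informative=2, random_state=0)
Xy, Gram = X.T @ y, X.T @ X

# alpha_min > 0 truncates the path at that regularization level.
alphas, _, coefs = lars_path_gram(
    Xy, Gram, n_samples=100, method="lasso", alpha_min=1.0
)
assert np.all(alphas >= 1.0 - 1e-12)

# return_path=False keeps only the last point of the path.
alpha_last, active, coef_last = lars_path_gram(
    Xy, Gram, n_samples=100, method="lasso", return_path=False
)

# positive=True (only valid with method='lasso') restricts coefficients to >= 0.
_, _, coefs_pos = lars_path_gram(
    Xy, Gram, n_samples=100, method="lasso", positive=True
)
assert (coefs_pos >= 0).all()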
Returns

alphas : ndarray of shape (n_alphas + 1,)
    Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features, or the number of nodes in the path with alpha >= alpha_min, whichever is smaller.
active : list
    Indices of active variables at the end of the path.
coefs : ndarray of shape (n_features, n_alphas + 1)
    Coefficients along the path.
n_iter : int
    Number of iterations run. Returned only if return_n_iter is set to True.
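The return values can be related to one another with a short sketch (shapes and the first knot, under the definitions given above):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path_gram

X, y = make_regression(n_samples=100, n_features=5, n_informative=2, random_state=0)
n_samples, n_features = X.shape
Xy, Gram = X.T @ y, X.T @ X

alphas, active, coefs = lars_path_gram(Xy, Gram, n_samples=n_samples, method="lasso")

# One coefficient column per knot of the path.
assert coefs.shape == (n_features, alphas.shape[0])

# The path starts at the all-zero solution...
assert np.allclose(coefs[:, 0], 0.0)

# ...and the first alpha is the largest absolute covariance, rescaled by n_samples.
assert np.isclose(alphas[0], np.abs(Xy).max() / n_samples)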
References

[1] "Least Angle Regression", Efron et al.
    https://statweb.stanford.edu/~tibs/ftp/lars.pdf
Examples
>>> from sklearn.linear_model import lars_path_gram
>>> from sklearn.datasets import make_regression
>>> X, y, true_coef = make_regression(
...     n_samples=100, n_features=5, n_informative=2, coef=True, random_state=0
... )
>>> true_coef
array([ 0. ,  0. ,  0. , 97.9, 45.7])
>>> alphas, _, estimated_coef = lars_path_gram(X.T @ y, X.T @ X, n_samples=100)
>>> alphas.shape
(3,)
>>> estimated_coef
array([[ 0.  ,  0.  ,  0.  ],
       [ 0.  ,  0.  ,  0.  ],
       [ 0.  ,  0.  ,  0.  ],
       [ 0.  , 46.96, 97.99],
       [ 0.  ,  0.  , 45.70]])
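As a cross-check of the congruence noted under positive, the coefficients at a strictly positive knot of the path should agree, up to solver tolerance, with coordinate-descent Lasso fitted at the same alpha (a sketch; the tight tol and loose atol are deliberately conservative choices):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, lars_path_gram

X, y = make_regression(n_samples=100, n_features=5, n_informative=2, random_state=0)
Xy, Gram = X.T @ y, X.T @ X

alphas, _, coefs = lars_path_gram(Xy, Gram, n_samples=100, method="lasso")

# Pick an interior knot with alpha > 0; both solvers minimize the same objective.
k = 1
cd = Lasso(alpha=alphas[k], fit_intercept=False, tol=1e-8, max_iter=100_000).fit(X, y)
assert np.allclose(cd.coef_, coefs[:, k], atol=1e-3)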