Linear model fitted by minimizing a regularized empirical loss with SGD.
SGD stands for Stochastic Gradient Descent: the gradient of the loss is estimated one sample at a time, and the model is updated along the way with a decreasing strength schedule (aka learning rate).
The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared Euclidean norm (L2), the absolute norm (L1), or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0, which allows for learning sparse models and achieves online feature selection.
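As an illustration of that truncation behavior, here is a minimal sketch with synthetic data and hypothetical settings, showing how an L1 penalty can drive uninformative coefficients to exactly zero:

import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.RandomState(0)
X = rng.randn(100, 20)
# Only the first three features carry signal; the rest are noise.
y = X[:, 0] + 2 * X[:, 1] - X[:, 2] + 0.01 * rng.randn(100)

reg = SGDRegressor(penalty="l1", alpha=0.1, max_iter=1000, random_state=0)
reg.fit(X, y)
# Updates that cross 0.0 are truncated, so many coefficients end up exactly 0.
print((reg.coef_ == 0).sum(), "of", reg.coef_.size, "coefficients are zero")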
This implementation works with data represented as dense numpy arrays of floating point values for the features.
Read more in the User Guide.
The loss function to be used. The possible values are ‘squared_error’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’.
The ‘squared_error’ refers to the ordinary least squares fit. ‘huber’ modifies ‘squared_error’ to focus less on getting outliers correct by switching from squared to linear loss past a distance of epsilon. ‘epsilon_insensitive’ ignores errors less than epsilon and is linear past that; this is the loss function used in SVR. ‘squared_epsilon_insensitive’ is the same but becomes squared loss past a tolerance of epsilon.
More details about the losses formulas can be found in the User Guide.
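As an aside, the epsilon-insensitive formula is simple enough to state directly. The helper below is a standalone illustration of that formula, not code from the library:

import numpy as np

def epsilon_insensitive(residual, eps=0.1):
    # Errors smaller than eps are ignored; beyond that the loss grows linearly.
    return np.maximum(0.0, np.abs(residual) - eps)

print(epsilon_insensitive(np.array([-0.3, 0.05, 0.2])))  # [0.2 0.  0.1]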
The penalty (aka regularization term) to be used. Defaults to ‘l2’, which is the standard regularizer for linear SVM models. ‘l1’ and ‘elasticnet’ might bring sparsity to the model (feature selection) not achievable with ‘l2’. No penalty is added when set to None.
You can see a visualisation of the penalties in SGD: Penalties.
Constant that multiplies the regularization term. The higher the value, the stronger the regularization. Also used to compute the learning rate when learning_rate is set to ‘optimal’. Values must be in the range [0.0, inf).
The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to the L2 penalty, l1_ratio=1 to L1. Only used if penalty is ‘elasticnet’. Values must be in the range [0.0, 1.0], or can be None if penalty is not ‘elasticnet’.
Changed in version 1.7: l1_ratio can be None when penalty is not ‘elasticnet’.
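For example, a minimal configuration sketch (the parameter values are illustrative, not recommendations):

from sklearn.linear_model import SGDRegressor

# Convex combination of the two penalties: l1_ratio weights the L1 term and
# (1 - l1_ratio) the L2 term, both scaled by alpha.
reg = SGDRegressor(penalty="elasticnet", l1_ratio=0.15, alpha=1e-4)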
Whether the intercept should be estimated or not. If False, the data is assumed to be already centered.
The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. Values must be in the range [1, inf).
Added in version 0.19.
The stopping criterion. If it is not None, training will stop when (loss > best_loss - tol) for n_iter_no_change consecutive epochs. Convergence is checked against the training loss or the validation loss depending on the early_stopping parameter. Values must be in the range [0.0, inf).
Added in version 0.19.
Whether or not the training data should be shuffled after each epoch.
The verbosity level. Values must be in the range [0, inf).
Epsilon in the epsilon-insensitive loss functions; only used if loss is ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. For ‘huber’, it determines the threshold at which it becomes less important to get the prediction exactly right. For the epsilon-insensitive losses, any difference between the current prediction and the correct label is ignored if it is less than this threshold. Values must be in the range [0.0, inf).
Used for shuffling the data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary.
The learning rate schedule:
‘constant’: eta = eta0
‘optimal’: eta = 1.0 / (alpha * (t + t0)), where t0 is chosen by a heuristic proposed by Leon Bottou.
‘invscaling’: eta = eta0 / pow(t, power_t)
‘adaptive’: eta = eta0, as long as the training loss keeps decreasing. Each time n_iter_no_change consecutive epochs fail to decrease the training loss by tol, or fail to increase the validation score by tol if early_stopping is True, the current learning rate is divided by 5.
Added in version 0.20: Added ‘adaptive’ option.
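A short standalone sketch of the ‘invscaling’ formula, with illustrative values:

import numpy as np

eta0, power_t = 0.01, 0.25
t = np.arange(1, 6)
# The step size decays polynomially with the update count t.
print(eta0 / np.power(t, power_t))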
The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. The default value is 0.01. Values must be in the range [0.0, inf).
The exponent for the inverse scaling learning rate. Values must be in the range [0.0, inf).
Deprecated since version 1.8: Negative values for power_t are deprecated and will raise an error in 1.10. Use values in the range [0.0, inf) instead.
Whether to use early stopping to terminate training when the validation score is not improving. If set to True, it will automatically set aside a fraction of the training data as a validation set and terminate training when the validation score returned by the score method is not improving by at least tol for n_iter_no_change consecutive epochs.
See Early stopping of Stochastic Gradient Descent for an example of the effects of early stopping.
Added in version 0.20: Added ‘early_stopping’ option.
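A minimal configuration sketch (the values are illustrative, not recommendations):

from sklearn.linear_model import SGDRegressor

# Hold out 10% of the training data as a validation set; stop once the
# validation score has failed to improve by at least tol for 5 epochs.
reg = SGDRegressor(early_stopping=True, validation_fraction=0.1,
                   n_iter_no_change=5, tol=1e-3)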
The proportion of training data to set aside as a validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. Values must be in the range (0.0, 1.0).
Added in version 0.20: Added ‘validation_fraction’ option.
Number of iterations with no improvement to wait before stopping fitting. Convergence is checked against the training loss or the validation loss depending on the early_stopping parameter. Integer values must be in the range [1, max_iter).
Added in version 0.20: Added ‘n_iter_no_change’ option.
When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution. See the Glossary.
Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. If a dynamic learning rate is used, the learning rate is adapted depending on the number of samples already seen. Calling fit resets this counter, while partial_fit will result in increasing the existing counter.
When set to True, computes the averaged SGD weights across all updates and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples.
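For example (illustrative):

from sklearn.linear_model import SGDRegressor

# Plain SGD for the first 10 samples, then weight averaging begins;
# the averaged weights are stored in coef_ after fitting.
reg = SGDRegressor(average=10)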
Weights assigned to the features.
The intercept term.
The actual number of iterations before reaching the stopping criterion.
Number of weight updates performed during training. Same as (n_iter_ * n_samples + 1).
Number of features seen during fit.
Added in version 0.24.
Names of features seen during fit, an array of shape (n_features_in_,). Defined only when X has feature names that are all strings.
Added in version 1.0.
See also
HuberRegressor
Linear regression model that is robust to outliers.
Lars
Least Angle Regression model.
Lasso
Linear Model trained with L1 prior as regularizer.
RANSACRegressor
RANSAC (RANdom SAmple Consensus) algorithm.
Ridge
Linear least squares with l2 regularization.
sklearn.svm.SVR
Epsilon-Support Vector Regression.
TheilSenRegressor
Theil-Sen Estimator robust multivariate regression model.
Examples
>>> import numpy as np
>>> from sklearn.linear_model import SGDRegressor
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> n_samples, n_features = 10, 5
>>> rng = np.random.RandomState(0)
>>> y = rng.randn(n_samples)
>>> X = rng.randn(n_samples, n_features)
>>> # Always scale the input. The most convenient way is to use a pipeline.
>>> reg = make_pipeline(StandardScaler(),
...                     SGDRegressor(max_iter=1000, tol=1e-3))
>>> reg.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()),
                ('sgdregressor', SGDRegressor())])
Convert coefficient matrix to dense array format.
Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
Fitted estimator.
Fit linear model with Stochastic Gradient Descent.
Training data.
Target values.
The initial coefficients to warm-start the optimization.
The initial intercept to warm-start the optimization.
Weights applied to individual samples (1. for unweighted).
Fitted SGDRegressor estimator.
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
A MetadataRequest encapsulating routing information.
Get parameters for this estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Perform one epoch of stochastic gradient descent on given samples.
Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user.
Subset of training data.
Subset of target values.
Weights applied to individual samples. If not provided, uniform weights are assumed.
Returns an instance of self.
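A minimal usage sketch with synthetic data, where the caller owns the epoch loop and any stopping logic:

import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.randn(200)

reg = SGDRegressor()
for epoch in range(20):      # the caller controls the number of epochs
    reg.partial_fit(X, y)    # one pass over this batch per call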
Predict using the linear model.
Input data.
Predicted target values per element in X.
Return the coefficient of determination on test data.
The coefficient of determination, \(R^2\), is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
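For example, computing \(R^2\) by hand with NumPy on a small hand-picked example:

import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
print(1.0 - u / v)                         # 0.9486...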
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
True values for X.
Sample weights.
\(R^2\) of self.predict(X) w.r.t. y.
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
Configure whether metadata should be requested to be passed to the fit method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Metadata routing for coef_init parameter in fit.
Metadata routing for intercept_init parameter in fit.
Metadata routing for sample_weight parameter in fit.
The updated object.
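A minimal usage sketch; note that routing must be enabled first:

import sklearn
from sklearn.linear_model import SGDRegressor

sklearn.set_config(enable_metadata_routing=True)  # required before use
# Ask meta-estimators to forward sample_weight to this estimator's fit.
reg = SGDRegressor().set_fit_request(sample_weight=True)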
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline
). The latter have parameters of the form <component>__<parameter>
so that it’s possible to update each component of a nested object.
Estimator parameters.
Estimator instance.
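For example, with a hypothetical pipeline:

from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe = make_pipeline(StandardScaler(), SGDRegressor())
# Nested parameters use the <component>__<parameter> syntax.
pipe.set_params(sgdregressor__alpha=1e-3)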
Configure whether metadata should be requested to be passed to the partial_fit method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to partial_fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to partial_fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Metadata routing for sample_weight parameter in partial_fit.
The updated object.
Configure whether metadata should be requested to be passed to the score method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Metadata routing for sample_weight parameter in score.
The updated object.
Convert coefficient matrix to sparse format.
Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted.
Fitted estimator.
Notes
For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits.
After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
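A minimal round-trip sketch with synthetic data and illustrative settings:

import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.RandomState(0)
X = rng.randn(50, 10)
y = X[:, 0] + 0.01 * rng.randn(50)

reg = SGDRegressor(penalty="l1", alpha=0.1, max_iter=1000).fit(X, y)
reg.sparsify()      # coef_ becomes a scipy.sparse matrix
reg.predict(X[:2])  # prediction works on the sparse representation
reg.densify()       # convert back before any further fitting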