Ridge classifier with built-in cross-validation.
See glossary entry for cross-validation estimator.
By default, it performs Leave-One-Out Cross-Validation. Currently, only the n_features > n_samples case is handled efficiently.
Read more in the User Guide.
Parameters
alphas : array-like of shape (n_alphas,), default=(0.1, 1.0, 10.0)
Array of alpha values to try. Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. If using Leave-One-Out cross-validation, alphas must be strictly positive.
fit_intercept : bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
scoring : str, callable, default=None
The scoring method to use for cross-validation. Options:
- str: see String name scorers for options.
- callable: a scorer callable object (e.g., function) with signature scorer(estimator, X, y); a sketch follows this list. See Callable scorers for details.
- None: negative mean squared error if cv is None (i.e. when using leave-one-out cross-validation), or accuracy otherwise.
cv : int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
- None, to use the efficient Leave-One-Out cross-validation.
- integer, to specify the number of folds.
- An iterable yielding (train, test) splits as arrays of indices.
Refer to the User Guide for the various cross-validation strategies that can be used here.
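As a short sketch, the two most common configurations:

from sklearn.linear_model import RidgeClassifierCV

# cv=None (the default): efficient Leave-One-Out cross-validation
clf_loo = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0])

# cv=5: ordinary 5-fold cross-validation over the same alpha grid
clf_kfold = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0], cv=5)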
class_weight : dict or 'balanced', default=None
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one.
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
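To make the formula concrete, a small NumPy sketch (the toy labels are illustrative):

import numpy as np

y = np.array([0, 0, 0, 1])  # imbalanced toy labels
n_samples, n_classes = len(y), len(np.unique(y))

# n_samples / (n_classes * np.bincount(y)) from the description above
weights = n_samples / (n_classes * np.bincount(y))
print(weights)  # [0.6667 2.0]: the rarer class 1 gets the larger weight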
store_cv_results : bool, default=False
Flag indicating if the cross-validation results corresponding to each alpha should be stored in the cv_results_ attribute (see below). This flag is only compatible with cv=None (i.e. using Leave-One-Out Cross-Validation).
Changed in version 1.5: Parameter name changed from store_cv_values to store_cv_results.
Attributes
cv_results_ : ndarray
Cross-validation results for each alpha (only if store_cv_results=True and cv=None). After fit() has been called, this attribute will contain the mean squared errors if scoring is None, otherwise it will contain standardized per-point prediction values.
Changed in version 1.5: cv_values_ changed to cv_results_.
coef_ : ndarray of shape (1, n_features) or (n_targets, n_features)
Coefficient of the features in the decision function. coef_ is of shape (1, n_features) when the given problem is binary.
intercept_ : float or ndarray of shape (n_targets,)
Independent term in decision function. Set to 0.0 if fit_intercept = False.
alpha_ : float
Estimated regularization parameter.
best_score_ : float
Score of base estimator with best alpha.
Added in version 0.23.
classes_ : ndarray of shape (n_classes,)
Class labels.
n_features_in_ : int
Number of features seen during fit.
Added in version 0.24.
feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
Added in version 1.0.
Notes
For multi-class classification, n_class classifiers are trained in a one-versus-all approach. Concretely, this is implemented by taking advantage of the multi-variate response support in Ridge.
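A quick way to see this one-versus-all structure directly, using the iris dataset:

from sklearn.datasets import load_iris
from sklearn.linear_model import RidgeClassifierCV

X, y = load_iris(return_X_y=True)
clf = RidgeClassifierCV().fit(X, y)

# One coefficient row (and one intercept) per class: 3 one-vs-all problems
print(clf.coef_.shape)       # (3, 4)
print(clf.intercept_.shape)  # (3,)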
Examples
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import RidgeClassifierCV
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = RidgeClassifierCV(alphas=[1e-3, 1e-2, 1e-1, 1]).fit(X, y)
>>> clf.score(X, y)
0.9630...
decision_function(X)
Predict confidence scores for samples.
The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane.
The data matrix for which we want to get the confidence scores.
Confidence scores per (n_samples, n_classes) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
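A brief sketch of the binary case described above:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifierCV

X, y = load_breast_cancer(return_X_y=True)
clf = RidgeClassifierCV().fit(X, y)

scores = clf.decision_function(X[:3])   # shape (3,) in the binary case
# A positive score means clf.classes_[1] would be predicted
print((scores > 0) == (clf.predict(X[:3]) == clf.classes_[1]))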
fit(X, y, sample_weight=None, **params)
Fit Ridge classifier with cv.
Training vectors, where n_samples is the number of samples and n_features is the number of features. When using GCV, will be cast to float64 if necessary.
Target values. Will be cast to X’s dtype if necessary.
Individual weights for each sample. If given a float, every sample will have the same weight.
Parameters to be passed to the underlying scorer.
Added in version 1.5: Only available if enable_metadata_routing=True, which can be set by using sklearn.set_config(enable_metadata_routing=True). See Metadata Routing User Guide for more details.
Fitted estimator.
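A minimal fit with explicit sample weights (uniform weights here, so the result matches an unweighted fit):

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifierCV

X, y = load_breast_cancer(return_X_y=True)
w = np.ones(len(y))  # uniform weights; any positive per-sample weights work

clf = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0]).fit(X, y, sample_weight=w)
print(clf.alpha_)  # the regularization strength chosen by cross-validation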
get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Added in version 1.5.
A MetadataRouter encapsulating routing information.
get_params(deep=True)
Get parameters for this estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
predict(X)
Predict class labels for samples in X.
The data matrix for which we want to predict the targets.
Vector or matrix containing the predictions. In binary and multiclass problems, this is a vector of length n_samples. In a multilabel problem, it returns a matrix of shape (n_samples, n_outputs).
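For example:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifierCV

X, y = load_breast_cancer(return_X_y=True)
clf = RidgeClassifierCV().fit(X, y)

preds = clf.predict(X[:5])  # vector of class labels, one per sample
print(preds.shape)          # (5,)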
score(X, y, sample_weight=None)
Return accuracy on provided data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Test samples.
True labels for X.
Sample weights.
Mean accuracy of self.predict(X) w.r.t. y.
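Since score is plain accuracy, it agrees with computing accuracy_score by hand:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifierCV
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
clf = RidgeClassifierCV().fit(X, y)

print(clf.score(X, y) == accuracy_score(y, clf.predict(X)))  # True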
set_fit_request(*, sample_weight=...)
Configure whether metadata should be requested to be passed to the fit method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in fit.
The updated object.
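A sketch of typical usage inside a meta-estimator, assuming metadata routing has been enabled globally and a recent scikit-learn with routed pipelines; the weight array w is illustrative:

import numpy as np
import sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifierCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

sklearn.set_config(enable_metadata_routing=True)

X, y = load_breast_cancer(return_X_y=True)
w = np.ones(len(y))

# Request sample_weight for the classifier's fit, and explicitly decline
# it for the scaler, so the pipeline knows where to route the metadata.
clf = RidgeClassifierCV().set_fit_request(sample_weight=True)
scaler = StandardScaler().set_fit_request(sample_weight=False)

pipe = make_pipeline(scaler, clf).fit(X, y, sample_weight=w)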
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Estimator parameters.
Estimator instance.
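For example:

from sklearn.linear_model import RidgeClassifierCV

clf = RidgeClassifierCV()
clf.set_params(alphas=[0.01, 0.1, 1.0], fit_intercept=False)
print(clf.get_params()["alphas"])  # [0.01, 0.1, 1.0]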
set_score_request(*, sample_weight=...)
Configure whether metadata should be requested to be passed to the score method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.
The updated object.