Return the lowest bound for C. The lower bound for C is computed such that for C in (l1_min_C, infinity) the model is guaranteed not to be empty. This applies to l1 penalized classifiers, such as sklearn.svm.LinearSVC with penalty='l1' and sklearn.linear_model.LogisticRegression with penalty='l1'.
This value is valid only if the class_weight parameter in fit() is not set.

For an example of how to use this function, see Regularization path of L1-Logistic Regression.
Parameters

X : Training vector, where n_samples is the number of samples and n_features is the number of features.

y : Target vector relative to X.

loss : Specifies the loss function. With 'squared_hinge' it is the squared hinge loss (a.k.a. L2 loss). With 'log' it is the loss of logistic regression models.

fit_intercept : Specifies whether the intercept should be fitted by the model. It must match the fit() method parameter.

intercept_scaling : When fit_intercept is True, the instance vector x becomes [x, intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. It must match the fit() method parameter.

Returns

l1_min_C : Minimum value for C.
Examples
>>> from sklearn.svm import l1_min_c
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=100, n_features=20, random_state=42)
>>> print(f"{l1_min_c(X, y, loss='squared_hinge', fit_intercept=True):.4f}")
0.0044
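As a sketch of what the bound means in practice (assuming binary data from make_classification and the liblinear solver; the 0.9 and 100 multipliers are arbitrary choices for illustration), fitting an l1-penalized LogisticRegression just below the bound should leave every coefficient at zero, while fitting well above it should yield at least one nonzero coefficient:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import l1_min_c

X, y = make_classification(n_samples=100, n_features=20, random_state=42)

# Smallest C at which the l1-penalized logistic model can be nonempty.
c_min = l1_min_c(X, y, loss="log", fit_intercept=True)

# Below the bound, the zero vector is optimal; above it, the model
# is guaranteed not to be empty (see the description above).
below = LogisticRegression(penalty="l1", solver="liblinear", C=0.9 * c_min).fit(X, y)
above = LogisticRegression(penalty="l1", solver="liblinear", C=100 * c_min).fit(X, y)

n_below = np.count_nonzero(below.coef_)
n_above = np.count_nonzero(above.coef_)
print(n_below, n_above)
```

The same pattern underlies the regularization-path example referenced above: a grid of C values is usually built as multiples of l1_min_c, e.g. `c_min * np.logspace(0, 4, 16)`, so the path starts exactly where coefficients begin to enter the model.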