Generalized Linear Model with a Poisson distribution.
This regressor uses the ‘log’ link function.
Read more in the User Guide.
Added in version 0.23.
Parameters

alpha : float, default=1.0
    Constant that multiplies the L2 penalty term and determines the regularization strength. `alpha = 0` is equivalent to unpenalized GLMs. In this case, the design matrix `X` must have full column rank (no collinearities). Values must be in the range `[0.0, inf)`.
fit_intercept : bool, default=True
    Specifies if a constant (a.k.a. bias or intercept) should be added to the linear predictor (`X @ coef + intercept`).
solver : {'lbfgs', 'newton-cholesky'}, default='lbfgs'
    Algorithm to use in the optimization problem:

    'lbfgs'
        Calls scipy's L-BFGS-B optimizer.

    'newton-cholesky'
        Uses Newton-Raphson steps (in arbitrary precision arithmetic equivalent to iterated reweighted least squares) with an inner Cholesky based solver. This solver is a good choice for `n_samples >> n_features`, especially with one-hot encoded categorical features with rare categories. Be aware that the memory usage of this solver has a quadratic dependency on `n_features` because it explicitly computes the Hessian matrix.

        Added in version 1.2.
max_iter : int, default=100
    The maximal number of iterations for the solver. Values must be in the range `[1, inf)`.
tol : float, default=1e-4
    Stopping criterion. For the lbfgs solver, the iteration will stop when ``max{|g_j|, j = 1, ..., d} <= tol``, where ``g_j`` is the j-th component of the gradient (derivative) of the objective function. Values must be in the range `(0.0, inf)`.
warm_start : bool, default=False
    If set to `True`, reuse the solution of the previous call to `fit` as initialization for `coef_` and `intercept_`.
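A minimal illustration of warm starting (variable names are illustrative): with `warm_start=True`, the second call to `fit` starts from the previously estimated coefficients rather than from scratch.

```python
from sklearn.linear_model import PoissonRegressor

X = [[1, 2], [2, 3], [3, 4], [4, 3]]
y = [12, 17, 22, 21]

reg = PoissonRegressor(warm_start=True)
reg.fit(X, y)
coef_first = reg.coef_.copy()

# The second fit initializes from the previous coef_ / intercept_ instead
# of zeros, so it typically converges in very few iterations.
reg.fit(X, y)
```

This is mainly useful when refitting on slightly changed data or a slightly changed `alpha`.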
verbose : int, default=0
    For the lbfgs solver, set `verbose` to any positive number for verbosity. Values must be in the range `[0, inf)`.
Attributes

coef_ : array of shape (n_features,)
    Estimated coefficients for the linear predictor (`X @ coef_ + intercept_`) in the GLM.
intercept_ : float
    Intercept (a.k.a. bias) added to the linear predictor.
n_features_in_ : int
    Number of features seen during `fit`.

    Added in version 0.24.
feature_names_in_ : ndarray of shape (`n_features_in_`,)
    Names of features seen during `fit`. Defined only when `X` has feature names that are all strings.

    Added in version 1.0.
n_iter_ : int
    Actual number of iterations used in the solver.
Examples
>>> from sklearn import linear_model
>>> clf = linear_model.PoissonRegressor()
>>> X = [[1, 2], [2, 3], [3, 4], [4, 3]]
>>> y = [12, 17, 22, 21]
>>> clf.fit(X, y)
PoissonRegressor()
>>> clf.score(X, y)
np.float64(0.990)
>>> clf.coef_
array([0.121, 0.158])
>>> clf.intercept_
np.float64(2.088)
>>> clf.predict([[1, 1], [3, 4]])
array([10.676, 21.875])
Methods

fit(X, y, sample_weight=None)
    Fit a Generalized Linear Model.

    X : array-like of shape (n_samples, n_features)
        Training data.
    y : array-like of shape (n_samples,)
        Target values.
    sample_weight : array-like of shape (n_samples,), default=None
        Sample weights.

    Returns the fitted model.
get_metadata_routing()
    Get metadata routing of this object. Please check the User Guide on how the routing mechanism works.

    Returns a `MetadataRequest` encapsulating routing information.
get_params(deep=True)
    Get parameters for this estimator.

    deep : bool, default=True
        If True, will return the parameters for this estimator and contained subobjects that are estimators.

    Returns parameter names mapped to their values.
predict(X)
    Predict using GLM with feature matrix `X`.

    X : array-like of shape (n_samples, n_features)
        Samples.

    Returns the predicted values.
score(X, y, sample_weight=None)
    Compute D^2, the percentage of deviance explained.

    D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error, while D^2 uses the deviance of this GLM; see the User Guide.

    D^2 is defined as \(D^2 = 1 - \frac{D(y_{true}, y_{pred})}{D_{null}}\), where \(D_{null}\) is the null deviance, i.e. the deviance of a model with the intercept alone, which corresponds to \(y_{pred} = \bar{y}\). The mean \(\bar{y}\) is averaged by `sample_weight`. The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse).

    X : array-like of shape (n_samples, n_features)
        Test samples.
    y : array-like of shape (n_samples,)
        True values of target.
    sample_weight : array-like of shape (n_samples,), default=None
        Sample weights.

    Returns the D^2 of `self.predict(X)` w.r.t. `y`.
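The definition above can be checked numerically. The helper below recomputes the Poisson deviance from its textbook formula, `2 * sum(y * log(y / mu) - (y - mu))`; it is a sketch for illustration, not scikit-learn's internal code:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

X = np.array([[1, 2], [2, 3], [3, 4], [4, 3]])
y = np.array([12, 17, 22, 21], dtype=float)
reg = PoissonRegressor().fit(X, y)

def poisson_deviance(y, mu):
    # 2 * sum(y * log(y / mu) - (y - mu)); the log term is 0 where y == 0.
    term = np.where(y > 0, y * np.log(np.where(y > 0, y / mu, 1.0)), 0.0)
    return 2.0 * np.sum(term - (y - mu))

d_model = poisson_deviance(y, reg.predict(X))
d_null = poisson_deviance(y, np.full_like(y, y.mean()))  # intercept-only model
d2 = 1.0 - d_model / d_null

# With uniform sample weights this matches the estimator's score.
print(d2, reg.score(X, y))
```

Note that the sum-vs-mean distinction cancels in the ratio, so the unweighted sums above agree with the (weighted-mean) deviances used by `score`.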
set_fit_request(*, sample_weight=...)
    Configure whether metadata should be requested to be passed to the `fit` method.

    Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with `enable_metadata_routing=True` (see `sklearn.set_config`). Please check the User Guide on how the routing mechanism works.

    The options for each parameter are:

    - `True`: metadata is requested, and passed to `fit` if provided. The request is ignored if metadata is not provided.
    - `False`: metadata is not requested and the meta-estimator will not pass it to `fit`.
    - `None`: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
    - `str`: metadata should be passed to the meta-estimator with this given alias instead of the original name.

    The default (`sklearn.utils.metadata_routing.UNCHANGED`) retains the existing request. This allows you to change the request for some parameters and not others.

    Added in version 1.3.

    sample_weight : Metadata routing for the `sample_weight` parameter in `fit`.

    Returns the updated object.
set_params(**params)
    Set the parameters of this estimator.

    The method works on simple estimators as well as on nested objects (such as `Pipeline`). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.

    **params : Estimator parameters.

    Returns the estimator instance.
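The nested `<component>__<parameter>` syntax can be sketched with a small pipeline (the step names "scale" and "glm" here are illustrative):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import PoissonRegressor

pipe = Pipeline([("scale", StandardScaler()), ("glm", PoissonRegressor())])

# "<step name>__<parameter>" reaches into the nested estimator.
pipe.set_params(glm__alpha=0.5, glm__max_iter=300)
print(pipe.get_params()["glm__alpha"])   # 0.5
```

This is the same mechanism grid-search tools use to vary hyperparameters of estimators inside a pipeline.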
set_score_request(*, sample_weight=...)
    Configure whether metadata should be requested to be passed to the `score` method.

    Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with `enable_metadata_routing=True` (see `sklearn.set_config`). Please check the User Guide on how the routing mechanism works.

    The options for each parameter are:

    - `True`: metadata is requested, and passed to `score` if provided. The request is ignored if metadata is not provided.
    - `False`: metadata is not requested and the meta-estimator will not pass it to `score`.
    - `None`: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
    - `str`: metadata should be passed to the meta-estimator with this given alias instead of the original name.

    The default (`sklearn.utils.metadata_routing.UNCHANGED`) retains the existing request. This allows you to change the request for some parameters and not others.

    Added in version 1.3.

    sample_weight : Metadata routing for the `sample_weight` parameter in `score`.

    Returns the updated object.