Soft Voting/Majority Rule classifier for unfitted estimators.
Read more in the User Guide.
Added in version 0.17.
Invoking the fit method on the VotingClassifier will fit clones of those original estimators that will be stored in the class attribute self.estimators_. An estimator can be set to 'drop' using set_params.
Changed in version 0.21: 'drop' is accepted. Using None was deprecated in 0.22 and support was removed in 0.24.
Parameters
voting : {'hard', 'soft'}, default='hard'
If 'hard', uses predicted class labels for majority rule voting. Else if 'soft', predicts the class label based on the argmax of the sums of the predicted probabilities, which is recommended for an ensemble of well-calibrated classifiers (see the worked sketch after the examples below).
weights : array-like of shape (n_classifiers,), default=None
Sequence of weights (float or int) to weight the occurrences of predicted class labels (hard voting) or class probabilities before averaging (soft voting). Uses uniform weights if None.
n_jobs : int, default=None
The number of jobs to run in parallel for fit. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
Added in version 0.18.
flatten_transform : bool, default=True
Affects shape of transform output only when voting='soft'. If voting='soft' and flatten_transform=True, the transform method returns a matrix with shape (n_samples, n_classifiers * n_classes). If flatten_transform=False, it returns (n_classifiers, n_samples, n_classes).
verbose : bool, default=False
If True, the time elapsed while fitting will be printed as it is completed.
Added in version 0.23.
Attributes
estimators_ : list of classifiers
The collection of fitted sub-estimators as defined in estimators that are not 'drop'.
named_estimators_ : Bunch
Attribute to access any fitted sub-estimators by name.
Added in version 0.20.
le_ : LabelEncoder
Transformer used to encode the labels during fit and decode during prediction.
classes_ : ndarray of shape (n_classes,)
The class labels.
n_features_in_ : int
Number of features seen during fit.
feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Only defined if the underlying estimators expose such an attribute when fit.
Added in version 1.0.
Examples
>>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.ensemble import RandomForestClassifier, VotingClassifier
>>> clf1 = LogisticRegression(random_state=1)
>>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
>>> clf3 = GaussianNB()
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> eclf1 = VotingClassifier(estimators=[
...     ('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='hard')
>>> eclf1 = eclf1.fit(X, y)
>>> print(eclf1.predict(X))
[1 1 1 2 2 2]
>>> np.array_equal(eclf1.named_estimators_.lr.predict(X),
...                eclf1.named_estimators_['lr'].predict(X))
True
>>> eclf2 = VotingClassifier(estimators=[
...     ('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='soft')
>>> eclf2 = eclf2.fit(X, y)
>>> print(eclf2.predict(X))
[1 1 1 2 2 2]
To drop an estimator, set_params can be used. Here we drop one of the estimators, resulting in 2 fitted estimators:
>>> eclf2 = eclf2.set_params(lr='drop')
>>> eclf2 = eclf2.fit(X, y)
>>> len(eclf2.estimators_)
2
Setting flatten_transform=True with voting='soft' flattens the output shape of transform:
>>> eclf3 = VotingClassifier(estimators=[
...     ('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='soft', weights=[2,1,1],
...     flatten_transform=True)
>>> eclf3 = eclf3.fit(X, y)
>>> print(eclf3.predict(X))
[1 1 1 2 2 2]
>>> print(eclf3.transform(X).shape)
(6, 6)
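For intuition, soft voting takes the argmax of the weighted average of the classifiers' predicted probabilities. A minimal illustrative check (not part of the original example; it reuses eclf3 and its weights from above):

>>> probas = np.stack([est.predict_proba(X) for est in eclf3.estimators_])
>>> avg = np.average(probas, axis=0, weights=[2, 1, 1])
>>> np.array_equal(eclf3.classes_[np.argmax(avg, axis=1)], eclf3.predict(X))
True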
fit
Fit the estimators.
X : Training vectors, where n_samples is the number of samples and n_features is the number of features.
y : Target values.
**fit_params : Parameters to pass to the underlying estimators.
Added in version 1.5: Only available if enable_metadata_routing=True, which can be set by using sklearn.set_config(enable_metadata_routing=True). See Metadata Routing User Guide for more details.
Returns: the instance itself.
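Passing metadata such as sample_weight through fit requires metadata routing to be enabled, and each sub-estimator must explicitly request the metadata. A minimal sketch of what this might look like (illustrative only, not from the original docs; assumes scikit-learn >= 1.5):

>>> import numpy as np
>>> from sklearn import set_config
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.ensemble import VotingClassifier
>>> set_config(enable_metadata_routing=True)
>>> lr = LogisticRegression().set_fit_request(sample_weight=True)
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> y = np.array([1, 1, 2, 2])
>>> sw = np.ones_like(y, dtype=float)  # hypothetical per-sample weights
>>> eclf = VotingClassifier(estimators=[('lr', lr)]).fit(X, y, sample_weight=sw)
>>> set_config(enable_metadata_routing=False)  # restore the default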
fit_transform
Return class labels or probabilities for each estimator. Return predictions for X for each estimator.
X : Input samples.
y : Target values (None for unsupervised transformations).
**fit_params : Additional fit parameters.
Returns: the transformed array.
get_feature_names_out
Get output feature names for transformation.
input_features : Not used, present here for API consistency by convention.
Returns: transformed feature names.
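A rough illustrative check (reusing the fitted hard-voting eclf1 from the examples above, where each estimator contributes one output column):

>>> len(eclf1.get_feature_names_out())
3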
get_metadata_routing
Get metadata routing of this object. Please check the User Guide on how the routing mechanism works.
Added in version 1.5.
Returns: a MetadataRouter encapsulating routing information.
get_params
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter.
deep : Setting it to True gets the various estimators and the parameters of the estimators as well.
Returns: parameter and estimator names mapped to their values, or parameter names mapped to their values.
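For instance, with eclf1 from the examples above, nested parameters of the sub-estimators appear under <name>__<param> keys only when deep=True (an illustrative check):

>>> 'lr__C' in eclf1.get_params(deep=True)
True
>>> 'lr__C' in eclf1.get_params(deep=False)
False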
named_estimators : Bunch
Dictionary to access any fitted sub-estimators by name.
predict
Predict class labels for X.
X : The input samples.
Returns: the predicted class labels.
predict_proba
Compute probabilities of possible outcomes for samples in X.
X : The input samples.
Returns: weighted average probability for each class per sample.
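Continuing with the soft-voting eclf3 from the examples above, the result has one column per class; predict_proba is only available when voting='soft' (illustrative checks):

>>> eclf3.predict_proba(X).shape
(6, 2)
>>> hasattr(eclf1, 'predict_proba')  # hard voting: not available
False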
score
Return accuracy on provided data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.
X : Test samples.
y : True labels for X.
sample_weight : Sample weights.
Returns: mean accuracy of self.predict(X) w.r.t. y.
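A quick illustrative call with the training data from the examples above, which eclf1 classifies perfectly:

>>> eclf1.score(X, y)
1.0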
set_output
Set output container. See Introducing the set_output API for an example on how to use the API.
Configure output of transform and fit_transform. The transform parameter accepts:
"default"
: Default output format of a transformer
"pandas"
: DataFrame output
"polars"
: Polars output
None
: Transform configuration is unchanged
Added in version 1.4: "polars"
option was added.
Estimator instance.
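A small illustrative use, assuming pandas is installed and reusing eclf3 from the examples above:

>>> eclf3 = eclf3.set_output(transform="pandas")
>>> type(eclf3.transform(X)).__name__
'DataFrame'
>>> eclf3 = eclf3.set_output(transform="default")  # restore ndarray output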
set_params
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators.
**params : Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the ensemble, the individual estimators within it can also be set, or removed by setting them to 'drop'.
Returns: the estimator instance.
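Beyond the 'drop' usage shown earlier, nested parameters of a sub-estimator can be set with the <name>__<param> syntax (an illustrative call on eclf1 from the examples above):

>>> eclf1 = eclf1.set_params(lr__C=100.0)
>>> eclf1.get_params()['lr__C']
100.0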
set_score_request
Configure whether metadata should be requested to be passed to the score method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works. The options for each parameter are:
True : metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False : metadata is not requested and the meta-estimator will not pass it to score.
None : metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str : metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
sample_weight : Metadata routing for sample_weight parameter in score.
Returns: the updated object.
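A minimal sketch of typical usage (illustrative only; set_score_request may only be called while metadata routing is enabled):

>>> from sklearn import set_config
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.ensemble import VotingClassifier
>>> set_config(enable_metadata_routing=True)
>>> vc = VotingClassifier(estimators=[('lr', LogisticRegression())])
>>> vc = vc.set_score_request(sample_weight=True)
>>> # A wrapping meta-estimator may now route sample_weight to vc.score.
>>> set_config(enable_metadata_routing=False)  # restore the default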
transform
Return class labels or probabilities for X for each estimator.
X : Training vectors, where n_samples is the number of samples and n_features is the number of features.
Returns, depending on the voting configuration:
If voting='soft' and flatten_transform=True: ndarray of shape (n_samples, n_classifiers * n_classes), being class probabilities calculated by each classifier.
If voting='soft' and flatten_transform=False: ndarray of shape (n_classifiers, n_samples, n_classes).
If voting='hard': ndarray of shape (n_samples, n_classifiers), being class labels predicted by each classifier.
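For example, with the fitted hard-voting eclf1 from the examples above (six samples, three classifiers), an illustrative shape check:

>>> eclf1.transform(X).shape
(6, 3)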