Learning curve.
Determines cross-validated training and test scores for different training set sizes.
A cross-validation generator splits the whole dataset k times into training and test data. Subsets of the training set with varying sizes are used to train the estimator, and a score for each training subset size and for the test set is computed. Afterwards, the scores are averaged over all k runs for each training subset size.
Read more in the User Guide.
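The averaging described above can be sketched directly with a cross-validation generator. The snippet below is a minimal illustration of the procedure, not the library's implementation; the estimator, subset sizes, and synthetic dataset are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=120, random_state=0)
sizes = [20, 50, 80]              # training subset sizes to evaluate
cv = KFold(n_splits=3)            # the cross-validation generator (k = 3)

test_scores = np.zeros((len(sizes), cv.get_n_splits()))
for j, (train_idx, test_idx) in enumerate(cv.split(X)):
    for i, n in enumerate(sizes):
        subset = train_idx[:n]    # prefix of the training fold of size n
        model = LogisticRegression(max_iter=1000).fit(X[subset], y[subset])
        test_scores[i, j] = model.score(X[test_idx], y[test_idx])

# Average the scores over all k runs for each training subset size
mean_test = test_scores.mean(axis=1)
```

learning_curve automates exactly this loop (plus training-set scores, parallelism, and input validation).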
Parameters

estimator : object type that implements the "fit" method
    An object of that type which is cloned for each validation. It must also implement "predict" unless scoring is a callable that doesn't rely on "predict" to compute a score.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    Training vector, where n_samples is the number of samples and n_features is the number of features.
y : array-like of shape (n_samples,) or (n_samples, n_outputs) or None
    Target relative to X for classification or regression; None for unsupervised learning.
groups : array-like of shape (n_samples,), default=None
    Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a "Group" cv instance (e.g., GroupKFold).

    Changed in version 1.6: groups can only be passed if metadata routing is not enabled via sklearn.set_config(enable_metadata_routing=True). When routing is enabled, pass groups alongside other metadata via the params argument instead. E.g.: learning_curve(..., params={'groups': groups}).
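A sketch of the default (routing disabled) calling convention with group-aware splitting; the synthetic dataset and contiguous group labels are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GroupKFold, learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=60, random_state=0)
groups = np.repeat(np.arange(6), 10)   # six synthetic groups of 10 samples

# Without metadata routing (the default), pass groups directly;
# GroupKFold then keeps each group entirely in train or in test.
sizes, train_sc, test_sc = learning_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    groups=groups, cv=GroupKFold(n_splits=3),
)
```

With routing enabled, the same call would instead carry the labels via params={'groups': groups}, as described above.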
train_sizes : array-like of shape (n_ticks,), default=np.linspace(0.1, 1.0, 5)
    Relative or absolute numbers of training examples that will be used to generate the learning curve. If the dtype is float, it is regarded as a fraction of the maximum size of the training set (that is determined by the selected validation method), i.e. it has to be within (0, 1]. Otherwise it is interpreted as absolute sizes of the training sets. Note that for classification the number of samples usually has to be large enough to contain at least one sample from each class.
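For instance (a sketch with an assumed synthetic dataset): with 100 samples and the default 5-fold CV, the maximum training set holds 80 samples, so the fractions [0.25, 0.5, 1.0] resolve to the same absolute sizes as passing [20, 40, 80] directly:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100, random_state=0)
est = DecisionTreeClassifier(random_state=0)

# Fractions of the maximum training set size (80 samples with 5-fold CV)
abs_sizes, _, _ = learning_curve(est, X, y, train_sizes=[0.25, 0.5, 1.0])

# Equivalent absolute sizes
abs_sizes2, _, _ = learning_curve(est, X, y, train_sizes=[20, 40, 80])
```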
cv : int, cross-validation generator or an iterable, default=None
    Determines the cross-validation splitting strategy. Possible inputs for cv are:

    - None, to use the default 5-fold cross validation,
    - int, to specify the number of folds in a (Stratified)KFold,
    - a CV splitter instance,
    - an iterable yielding (train, test) splits as arrays of indices.

    For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls.

    Refer to the User Guide for the various cross-validation strategies that can be used here.

    Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
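As a sketch of two of these input types (dataset and estimator are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100, random_state=0)
est = DecisionTreeClassifier(random_state=0)

# int: a classifier with binary y, so StratifiedKFold with 4 folds is used
_, _, scores_int = learning_curve(est, X, y, cv=4)

# CV splitter instance: full control over the splitting strategy
_, _, scores_split = learning_curve(
    est, X, y, cv=KFold(n_splits=4, shuffle=True, random_state=0)
)
```

Each returned score array has one column per fold, so both calls here produce 4 columns.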
scoring : str, callable, or None, default=None
    Scoring method to use to evaluate the training and test sets.

    - str: see String name scorers for options.
    - callable: a scorer callable object (e.g., function) with signature scorer(estimator, X, y). See Callable scorers for details.
    - None: the estimator's default evaluation criterion is used.
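A sketch of the str and callable forms; the F1-based scorer below is a hypothetical example written for this illustration, not a built-in:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100, random_state=0)
est = DecisionTreeClassifier(random_state=0)

# str: the name of a built-in scorer
_, _, acc = learning_curve(est, X, y, scoring="accuracy")

# callable: any function with signature scorer(estimator, X, y)
def f1_scorer(estimator, X, y):
    return f1_score(y, estimator.predict(X))

_, _, f1 = learning_curve(est, X, y, scoring=f1_scorer)
```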
exploit_incremental_learning : bool, default=False
    If the estimator supports incremental learning, this will be used to speed up fitting for different training set sizes.
n_jobs : int, default=None
    Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the different training and test sets. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
pre_dispatch : int or str, default='all'
    Number of predispatched jobs for parallel execution (default is all). The option can reduce the allocated memory. The str can be an expression like '2*n_jobs'.
verbose : int, default=0
    Controls the verbosity: the higher, the more messages.
shuffle : bool, default=False
    Whether to shuffle training data before taking prefixes of it based on train_sizes.
random_state : int, RandomState instance or None, default=None
    Used when shuffle is True. Pass an int for reproducible output across multiple function calls. See Glossary.
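A sketch showing that a fixed random_state makes shuffled runs repeatable (dataset and estimator are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100, random_state=0)
est = DecisionTreeClassifier(random_state=0)

# Two calls with the same random_state shuffle identically,
# so they return identical (train_sizes_abs, train_scores, test_scores)
out1 = learning_curve(est, X, y, shuffle=True, random_state=0)
out2 = learning_curve(est, X, y, shuffle=True, random_state=0)
```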
error_score : 'raise' or numeric, default=np.nan
    Value to assign to the score if an error occurs in estimator fitting. If set to 'raise', the error is raised. If a numeric value is given, FitFailedWarning is raised.

    Added in version 0.20.
return_times : bool, default=False
    Whether to return the fit and score times.
fit_params : dict, default=None
    Parameters to pass to the fit method of the estimator.

    Deprecated since version 1.6: This parameter is deprecated and will be removed in version 1.8. Use params instead.
params : dict, default=None
    Parameters to pass to the fit method of the estimator and to the scorer.

    - If enable_metadata_routing=False (default): Parameters directly passed to the fit method of the estimator.
    - If enable_metadata_routing=True: Parameters safely routed to the fit method of the estimator. See Metadata Routing User Guide for more details.

    Added in version 1.6.
Returns

train_sizes_abs : array of shape (n_unique_ticks,)
    Numbers of training examples that have been used to generate the learning curve. Note that the number of ticks might be less than n_ticks because duplicate entries will be removed.

train_scores : array of shape (n_ticks, n_cv_folds)
    Scores on training sets.

test_scores : array of shape (n_ticks, n_cv_folds)
    Scores on test sets.

fit_times : array of shape (n_ticks, n_cv_folds)
    Times spent for fitting in seconds. Only present if return_times is True.

score_times : array of shape (n_ticks, n_cv_folds)
    Times spent for scoring in seconds. Only present if return_times is True.
Examples
>>> from sklearn.datasets import make_classification
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.model_selection import learning_curve
>>> X, y = make_classification(n_samples=100, n_features=10, random_state=42)
>>> tree = DecisionTreeClassifier(max_depth=4, random_state=42)
>>> train_size_abs, train_scores, test_scores = learning_curve(
...     tree, X, y, train_sizes=[0.3, 0.6, 0.9]
... )
>>> for train_size, cv_train_scores, cv_test_scores in zip(
...     train_size_abs, train_scores, test_scores
... ):
...     print(f"{train_size} samples were used to train the model")
...     print(f"The average train accuracy is {cv_train_scores.mean():.2f}")
...     print(f"The average test accuracy is {cv_test_scores.mean():.2f}")
24 samples were used to train the model
The average train accuracy is 1.00
The average test accuracy is 0.85
48 samples were used to train the model
The average train accuracy is 1.00
The average test accuracy is 0.90
72 samples were used to train the model
The average train accuracy is 1.00
The average test accuracy is 0.93