Class-wise stratified ShuffleSplit cross-validator.
Provides train/test indices to split data into train/test sets.

This cross-validation object is a merge of StratifiedKFold and ShuffleSplit, which returns stratified randomized folds. The folds are made by preserving the percentage of samples for each class in y, in a binary or multiclass classification setting.
Note: like the ShuffleSplit strategy, stratified random splits do not guarantee that test sets across all folds will be mutually exclusive, and folds may contain overlapping samples. In practice, however, mutually exclusive test sets remain very likely for sizeable datasets.
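The overlap behaviour described above is easy to demonstrate. A minimal sketch with hypothetical data (the array values and parameters here are illustrative, not from the docstring's own example): with 3 folds of 4 test samples drawn from only 10 samples, at least two test sets must share indices.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.zeros((10, 2))                # placeholder features; only y drives stratification
y = np.array([0] * 5 + [1] * 5)      # balanced binary labels
sss = StratifiedShuffleSplit(n_splits=3, test_size=0.4, random_state=0)

# Collect the test indices of each fold; folds are drawn independently,
# so nothing prevents two folds from picking the same sample.
test_sets = [set(test) for _, test in sss.split(X, y)]
print(test_sets)
```

Here 3 folds x 4 test samples = 12 draws from 10 samples, so by pigeonhole at least one pair of test sets overlaps, regardless of the seed.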
Read more in the User Guide.
For a visualisation of cross-validation behaviour and a comparison between common scikit-learn split methods, refer to Visualizing cross-validation behavior in scikit-learn.
Parameters
n_splits : int, default=10
Number of re-shuffling & splitting iterations.

test_size : float or int, default=None
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is set to the complement of the train size. If train_size is also None, it will be set to 0.1.

train_size : float or int, default=None
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.

random_state : int, RandomState instance or None, default=None
Controls the randomness of the training and testing indices produced. Pass an int for reproducible output across multiple function calls. See Glossary.
Examples
>>> import numpy as np
>>> from sklearn.model_selection import StratifiedShuffleSplit
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([0, 0, 0, 1, 1, 1])
>>> sss = StratifiedShuffleSplit(n_splits=5, test_size=0.5, random_state=0)
>>> sss.get_n_splits(X, y)
5
>>> print(sss)
StratifiedShuffleSplit(n_splits=5, random_state=0, ...)
>>> for i, (train_index, test_index) in enumerate(sss.split(X, y)):
...     print(f"Fold {i}:")
...     print(f"  Train: index={train_index}")
...     print(f"  Test:  index={test_index}")
Fold 0:
  Train: index=[5 2 3]
  Test:  index=[4 1 0]
Fold 1:
  Train: index=[5 1 4]
  Test:  index=[0 2 3]
Fold 2:
  Train: index=[5 0 2]
  Test:  index=[4 3 1]
Fold 3:
  Train: index=[4 1 0]
  Test:  index=[2 3 5]
Fold 4:
  Train: index=[0 5 1]
  Test:  index=[3 4 2]
get_metadata_routing()
Get metadata routing of this object.
Please check the User Guide on how the routing mechanism works.

Returns
routing : MetadataRequest
A MetadataRequest encapsulating routing information.
get_n_splits(X=None, y=None, groups=None)
Returns the number of splitting iterations in the cross-validator.

Parameters
X : object
Always ignored, exists for compatibility.
y : object
Always ignored, exists for compatibility.
groups : object
Always ignored, exists for compatibility.

Returns
n_splits : int
Returns the number of splitting iterations in the cross-validator.
split(X, y, groups=None)
Generate indices to split data into training and test set.

Parameters
X : array-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
Note that providing y is sufficient to generate the splits and hence np.zeros(n_samples) may be used as a placeholder for X instead of actual training data.

y : array-like of shape (n_samples,)
The target variable for supervised learning problems. Stratification is done based on the y labels.

groups : object
Always ignored, exists for compatibility.

Yields
train : ndarray
The training set indices for that split.

test : ndarray
The testing set indices for that split.
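The note about the X placeholder can be illustrated with a small sketch (the labels and parameter values here are hypothetical): since stratification depends only on y and the number of samples, np.zeros(n_samples) produces exactly the same indices as real feature data of the same length.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

y = np.array([0, 0, 0, 1, 1, 1])
sss = StratifiedShuffleSplit(n_splits=2, test_size=0.5, random_state=42)

# np.zeros(n_samples) stands in for X: only its length is used.
placeholder = np.zeros(len(y))
placeholder_splits = [(tr.tolist(), te.tolist()) for tr, te in sss.split(placeholder, y)]

# The same integer seed on real feature data yields identical indices.
X_real = np.arange(12).reshape(6, 2)
real_splits = [(tr.tolist(), te.tolist()) for tr, te in sss.split(X_real, y)]
print(placeholder_splits == real_splits)  # True: splits depend only on y and n_samples
```

This is convenient when the features are large or not yet loaded and only the index partition is needed.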
Notes
Randomized CV splitters may return different results for each call of split. You can make the results identical by setting random_state
to an integer.
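A short sketch of the reproducibility point above, using hypothetical data: with an integer random_state, repeated calls to split return identical folds.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.zeros((8, 1))                 # placeholder features

# An integer seed re-initialises the RNG on every call to split,
# so the generated folds are identical across calls.
sss = StratifiedShuffleSplit(n_splits=3, test_size=0.25, random_state=7)
first = [test.tolist() for _, test in sss.split(X, y)]
second = [test.tolist() for _, test in sss.split(X, y)]
print(first == second)  # True: same seed, same folds
```

With random_state=None (the default), each call draws from the global RNG and the folds generally differ between calls.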