Extracts patches from a collection of images.
Read more in the User Guide.
Added in version 0.9.
patch_size : tuple of int (patch_height, patch_width), default=None
    The dimensions of one patch. If set to None, the patch size will be automatically set to (img_height // 10, img_width // 10), where img_height and img_width are the dimensions of the input images.
max_patches : int or float, default=None
    The maximum number of patches per image to extract. If max_patches is a float in (0, 1), it is taken to mean a proportion of the total number of patches. If set to None, extract all possible patches.
random_state : int, RandomState instance, default=None
    Determines the random number generator used for random sampling when max_patches is not None. Use an int to make the randomness deterministic. See Glossary.
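A minimal sketch of how these parameters interact (image sizes here are arbitrary, chosen only to make the arithmetic easy to follow):

```python
import numpy as np
from sklearn.feature_extraction.image import PatchExtractor

# One 50x60 grayscale image, shaped (n_samples, img_height, img_width).
X = np.zeros((1, 50, 60))

# patch_size=None falls back to (img_height // 10, img_width // 10) = (5, 6).
auto = PatchExtractor(patch_size=None).transform(X)
print(auto.shape[1:])  # (5, 6)

# All possible 8x8 patches: (50 - 8 + 1) * (60 - 8 + 1) = 43 * 53 = 2279.
full = PatchExtractor(patch_size=(8, 8)).transform(X)

# max_patches=0.1 keeps 10% of that total, sampled at random;
# random_state makes the sampling reproducible across calls.
sub = PatchExtractor(patch_size=(8, 8), max_patches=0.1,
                     random_state=0).transform(X)
print(full.shape[0], sub.shape[0])  # 2279 227
```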
Notes
This estimator is stateless and does not need to be fitted. However, we recommend calling fit_transform instead of transform, as parameter validation is only performed in fit.
Examples
>>> from sklearn.datasets import load_sample_images
>>> from sklearn.feature_extraction import image
>>> # Use the array data from the second image in this dataset:
>>> X = load_sample_images().images[1]
>>> X = X[None, ...]
>>> print(f"Image shape: {X.shape}")
Image shape: (1, 427, 640, 3)
>>> pe = image.PatchExtractor(patch_size=(10, 10))
>>> pe_trans = pe.transform(X)
>>> print(f"Patches shape: {pe_trans.shape}")
Patches shape: (263758, 10, 10, 3)
>>> X_reconstructed = image.reconstruct_from_patches_2d(pe_trans, X.shape[1:])
>>> print(f"Reconstructed shape: {X_reconstructed.shape}")
Reconstructed shape: (427, 640, 3)
Only validate the parameters of the estimator.
This method allows one to: (i) validate the parameters of the estimator and (ii) stay consistent with the scikit-learn transformer API.
Array of images from which to extract patches. For color images, the last dimension specifies the channel: an RGB image would have n_channels=3.
Not used, present for API consistency by convention.
Returns the instance itself.
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Input samples.
Target values (None for unsupervised transformations).
Additional fit parameters.
Transformed array.
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
A MetadataRequest encapsulating routing information.
Get parameters for this estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
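A short sketch of get_params in use (the parameter values are illustrative):

```python
from sklearn.feature_extraction.image import PatchExtractor

pe = PatchExtractor(patch_size=(8, 8), max_patches=100)

# get_params returns a dict mapping parameter names to their values.
params = pe.get_params()
print(params["patch_size"], params["max_patches"])  # (8, 8) 100
```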
Set output container.
See Introducing the set_output API for an example on how to use the API.
Configure output of transform and fit_transform.
"default"
: Default output format of a transformer
"pandas"
: DataFrame output
"polars"
: Polars output
None
: Transform configuration is unchanged
Added in version 1.4: "polars"
option was added.
Estimator instance.
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Estimator parameters.
Estimator instance.
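A minimal sketch of set_params on this (non-nested) estimator; because the estimator instance itself is returned, calls can be chained:

```python
from sklearn.feature_extraction.image import PatchExtractor

pe = PatchExtractor(patch_size=(8, 8))

# set_params updates parameters in place and returns the estimator.
pe.set_params(patch_size=(4, 4), max_patches=10)
print(pe.patch_size, pe.max_patches)  # (4, 4) 10
```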
Transform the image samples in X into a matrix of patch data.
Array of images from which to extract patches. For color images, the last dimension specifies the channel: an RGB image would have n_channels=3.
The collection of patches extracted from the images, where n_patches is either n_samples * max_patches or the total number of patches that can be extracted.
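To illustrate the second case (image sizes chosen only to make the count easy to verify), with max_patches=None the number of patches per image is (img_height - patch_height + 1) * (img_width - patch_width + 1):

```python
import numpy as np
from sklearn.feature_extraction.image import PatchExtractor

# Two 20x30 RGB images: (n_samples, img_height, img_width, n_channels).
X = np.zeros((2, 20, 30, 3))

patches = PatchExtractor(patch_size=(5, 5)).transform(X)

# Total patches: n_samples * (20 - 5 + 1) * (30 - 5 + 1) = 2 * 16 * 26 = 832.
print(patches.shape)  # (832, 5, 5, 3)
```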