Scale each feature to the [-1, 1] range without breaking the sparsity.
This estimator scales each feature individually such that the maximal absolute value of each feature in the training set will be 1.0.
This scaler can also be applied to sparse CSR or CSC matrices.
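As a quick illustration of the sparsity guarantee, the minimal sketch below (assuming SciPy is available) scales a small CSR matrix; zero entries stay zero, so the number of stored values is unchanged:

>>> from scipy import sparse
>>> from sklearn.preprocessing import maxabs_scale
>>> X_sparse = sparse.csr_matrix([[-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
>>> X_scaled = maxabs_scale(X_sparse, axis=0)
>>> X_scaled.nnz  # same number of stored (non-zero) values as the input
4
>>> X_scaled.toarray()
array([[-1. ,  0. ,  1. ],
       [-0.5,  0. ,  0.5]])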
Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The data.
axis : {0, 1}, default=0
    Axis used to scale along. If 0, independently scale each feature; otherwise (if 1) scale each sample.
copy : bool, default=True
    If False, try to avoid a copy and scale in place. This is not guaranteed to always work in place; e.g. if the data is a numpy array with an int dtype, a copy will be returned even with copy=False.
Returns
X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features)
    The transformed data.
Warning
Risk of data leak: do not use maxabs_scale unless you know what you are doing. A common mistake is to apply it to the entire dataset before splitting it into training and test sets. This biases the model evaluation because information has leaked from the test set to the training set. In general, we recommend using MaxAbsScaler within a Pipeline in order to prevent most risks of data leaking: pipe = make_pipeline(MaxAbsScaler(), LogisticRegression()).
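To make the recommendation concrete, here is a hedged sketch of the leak-free pattern (make_classification serves only as stand-in toy data): the pipeline is fit on the training split alone, so the scaling statistics never see the test set.

>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import MaxAbsScaler
>>> X, y = make_classification(random_state=0)  # toy data for illustration
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
>>> pipe = make_pipeline(MaxAbsScaler(), LogisticRegression())
>>> _ = pipe.fit(X_train, y_train)  # scaling statistics come from the training split only
>>> pipe.score(X_test, y_test)  # doctest: +SKIP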
See also
MaxAbsScaler
    Performs scaling to the [-1, 1] range using the Transformer API (e.g. as part of a preprocessing Pipeline).
Notes
NaNs are treated as missing values: they are disregarded when computing the statistics and preserved during the data transformation.
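For instance, in the sketch below (values chosen arbitrarily for illustration), the NaN is skipped when the second column's maximum absolute value is computed, and it passes through to the output unchanged:

>>> import numpy as np
>>> from sklearn.preprocessing import maxabs_scale
>>> X = np.array([[-2.0, np.nan], [-1.0, 3.0]])
>>> maxabs_scale(X, axis=0)  # max abs of the second column is 3.0; the NaN is ignored
array([[-1. ,  nan],
       [-0.5,  1. ]])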
For a comparison of the different scalers, transformers, and normalizers, see: Compare the effect of different scalers on data with outliers.
Examples
>>> from sklearn.preprocessing import maxabs_scale
>>> X = [[-2, 1, 2], [-1, 0, 1]]
>>> maxabs_scale(X, axis=0)  # scale each column independently
array([[-1. ,  1. ,  1. ],
       [-0.5,  0. ,  0.5]])
>>> maxabs_scale(X, axis=1)  # scale each row independently
array([[-1. ,  0.5,  1. ],
       [-1. ,  0. ,  1. ]])
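Relatedly, the copy parameter can avoid allocating a new array. In-place operation is not guaranteed (see the parameter description above), but for a float ndarray it typically modifies the input directly, as in this sketch:

>>> import numpy as np
>>> from sklearn.preprocessing import maxabs_scale
>>> X = np.array([[-2.0, 1.0], [-1.0, 0.5]])
>>> _ = maxabs_scale(X, copy=False)  # float dtype, so scaling can happen in place
>>> X  # the original array now holds the scaled values
array([[-1. ,  1. ],
       [-0.5,  0.5]])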