All Implemented Interfaces: Serializable, org.apache.spark.internal.Logging, RobustScalerParams, Params, HasInputCol, HasOutputCol, HasRelativeError, DefaultParamsWritable, Identifiable, MLWritable
Scale features using statistics that are robust to outliers. RobustScaler removes the median and scales the data according to the quantile range. The quantile range is by default the IQR (interquartile range: the range between the 1st quartile, the 25th quantile, and the 3rd quartile, the 75th quantile), but it can be configured. Centering and scaling happen independently on each feature, by computing the relevant statistics on the samples in the training set. The median and quantile range are then stored, to be applied to later data via the transform method.

Standardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean and variance in a negative way; in such cases, the median and the quantile range often give better results.

Note that NaN values are ignored in the computation of medians and ranges.
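The statistics described above can be sketched in plain Python. This is an illustration of the math only, not Spark's implementation (Spark computes quantiles approximately and distributed); `robust_stats` and `scale` are hypothetical helper names:

```python
import math

def robust_stats(values, lower=0.25, upper=0.75):
    """Median and quantile range of one feature; NaN values are ignored,
    mirroring the documented behavior."""
    xs = sorted(v for v in values if not math.isnan(v))

    def quantile(q):
        # Linear interpolation on the sorted sample.
        pos = q * (len(xs) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(xs) - 1)
        frac = pos - lo
        return xs[lo] * (1 - frac) + xs[hi] * frac

    median = quantile(0.5)
    qrange = quantile(upper) - quantile(lower)
    return median, qrange

def scale(v, median, qrange):
    # Centering removes the median; scaling divides by the quantile range.
    return (v - median) / qrange
```

For `[1, 2, 3, 4, 5]` the median is 3 and the IQR is 4 - 2 = 2, so the value 5 scales to 1.0; an extra NaN in the input leaves both statistics unchanged.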
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging: LogStringContext, SparkShellLoggingFilter
Constructors
Creates a copy of this instance with the same UID and some extra params.
Fits a model to the input data.
Param for input column name.
Lower quantile to calculate the quantile range, shared by all features. Default: 0.25
Param for output column name.
Param for the relative target precision for the approximate quantile algorithm.
Check transform validity and derive the output schema from the input schema.
An immutable unique ID for the object and its derivatives.
Upper quantile to calculate the quantile range, shared by all features. Default: 0.75
Whether to center the data with median before scaling.
Whether to scale the data to quantile range.
Methods inherited from interface org.apache.spark.internal.Logging: initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
Methods inherited from interface org.apache.spark.ml.util.MLWritable: save
Methods inherited from interface org.apache.spark.ml.param.Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
public RobustScaler()
Lower quantile to calculate the quantile range, shared by all features. Default: 0.25
Specified by: lower in interface RobustScalerParams
Upper quantile to calculate the quantile range, shared by all features. Default: 0.75
Specified by: upper in interface RobustScalerParams
Whether to center the data with the median before scaling. It will build a dense output, so take care when applying to sparse input. Default: false
Specified by: withCentering in interface RobustScalerParams
Whether to scale the data to the quantile range. Default: true
Specified by: withScaling in interface RobustScalerParams
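How the two flags combine can be sketched in plain Python (a hypothetical helper, not Spark code; the defaults mirror the documented parameter defaults):

```python
def robust_scale(v, median, qrange, with_centering=False, with_scaling=True):
    """Apply the fitted statistics to one value.
    withCentering subtracts the median; withScaling divides by the
    quantile range. Defaults match the docs: centering off, scaling on."""
    out = v - median if with_centering else v
    if with_scaling:
        out = out / qrange
    return out
```

With median 3 and quantile range 2, the value 5 maps to 2.5 under the defaults (scaling only) and to 1.0 when centering is also enabled; with both flags off the value passes through unchanged.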
Param for the relative target precision for the approximate quantile algorithm. Must be in the range [0, 1].
Specified by: relativeError in interface HasRelativeError
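The meaning of a relative-error bound for approximate quantiles can be sketched as follows. This is an illustration of the rank-error guarantee that approximate-quantile algorithms of this kind provide, not Spark's algorithm; `is_acceptable` is a hypothetical name:

```python
import bisect

def is_acceptable(sorted_xs, candidate, q, relative_error):
    """A candidate q-quantile is acceptable when its rank in the sorted
    sample lies within relative_error * n of the target rank q * n."""
    n = len(sorted_xs)
    lo = bisect.bisect_left(sorted_xs, candidate)   # smallest rank of candidate
    hi = bisect.bisect_right(sorted_xs, candidate)  # largest rank of candidate
    target = q * n
    tol = relative_error * n
    return lo - tol <= target <= hi + tol
```

For a sample of 1..100, a relative error of 0.05 means any value whose rank is within 5 positions of the true median is an acceptable answer, so 52 qualifies while 60 does not.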
Param for output column name.
Specified by: outputCol in interface HasOutputCol
Param for input column name.
Specified by: inputCol in interface HasInputCol
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable
Fits a model to the input data.
Specified by: fit in class Estimator<RobustScalerModel>
Parameters: dataset - (undocumented)
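The estimator/model split — fit computes the statistics on training data, and the returned model applies them to later data — can be sketched in plain Python (hypothetical `TinyRobustScaler` classes, not the Spark API):

```python
import math

class TinyRobustScaler:
    """Estimator: fit() computes median and quantile range, then returns a
    model holding only those statistics."""
    def __init__(self, lower=0.25, upper=0.75):
        self.lower, self.upper = lower, upper

    def fit(self, column):
        xs = sorted(v for v in column if not math.isnan(v))

        def q(p):
            # Nearest-rank quantile; Spark uses an approximate algorithm.
            return xs[round(p * (len(xs) - 1))]

        return TinyRobustScalerModel(q(0.5), q(self.upper) - q(self.lower))

class TinyRobustScalerModel:
    """Model: transform() applies the stored statistics, not the data."""
    def __init__(self, median, qrange):
        self.median, self.qrange = median, qrange

    def transform(self, column):
        return [(v - self.median) / self.qrange for v in column]
```

Fitting on `[1, 2, 3, 4, 5]` stores median 3 and quantile range 2, and the model can then transform values the estimator never saw.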
Check transform validity and derive the output schema from the input schema.

We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

Typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)
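The kind of check transformSchema performs — validating the input column and deriving the output schema without touching any data — can be sketched in plain Python, with the schema modeled as a simple dict of column name to type (a hedged illustration, not Spark's StructType machinery):

```python
def transform_schema(schema, input_col, output_col):
    """Validate the input column and derive the output schema.
    Raises on invalid configuration; never reads the data itself."""
    if input_col not in schema:
        raise ValueError(f"Input column {input_col} does not exist.")
    if schema[input_col] != "vector":
        raise ValueError(f"Column {input_col} must be of vector type.")
    if output_col in schema:
        raise ValueError(f"Output column {output_col} already exists.")
    out = dict(schema)
    out[output_col] = "vector"  # the scaled column has the same type
    return out
```

Running this check up front lets a pipeline fail fast on a misconfigured stage before any expensive fitting starts.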
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Estimator<RobustScalerModel>
Parameters: extra - (undocumented)