All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, OneHotEncoderBase, Params, HasHandleInvalid, HasInputCol, HasInputCols, HasOutputCol, HasOutputCols, Identifiable, MLWritable
param: categorySizes Original number of categories for each feature being encoded. The array contains one value for each input column, in order.
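The sketch below (hypothetical data, column names, and session setup; assumes Spark 3.x) shows how a OneHotEncoderModel is obtained by fitting a OneHotEncoder on pre-indexed categorical columns and then used to transform a DataFrame.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.OneHotEncoder;
import org.apache.spark.ml.feature.OneHotEncoderModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class OneHotEncoderModelSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("OneHotEncoderModelSketch").getOrCreate();

    // Hypothetical pre-indexed categorical columns.
    List<Row> data = Arrays.asList(
        RowFactory.create(0.0, 1.0),
        RowFactory.create(1.0, 0.0),
        RowFactory.create(2.0, 1.0));
    StructType schema = new StructType(new StructField[]{
        new StructField("categoryIndex1", DataTypes.DoubleType, false, Metadata.empty()),
        new StructField("categoryIndex2", DataTypes.DoubleType, false, Metadata.empty())});
    Dataset<Row> df = spark.createDataFrame(data, schema);

    // Fitting a OneHotEncoder produces a OneHotEncoderModel.
    OneHotEncoderModel model = new OneHotEncoder()
        .setInputCols(new String[]{"categoryIndex1", "categoryIndex2"})
        .setOutputCols(new String[]{"categoryVec1", "categoryVec2"})
        .fit(df);

    // Each input column gets a one-hot encoded vector column in the output.
    model.transform(df).show(false);
    spark.stop();
  }
}
```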
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Method Summary:
int[] categorySizes() - Original number of categories for each feature being encoded.
OneHotEncoderModel copy(ParamMap extra) - Creates a copy of this instance with the same UID and some extra params.
BooleanParam dropLast() - Whether to drop the last category in the encoded vector (default: true).
Param<String> handleInvalid() - Param for how to handle invalid data during transform().
Param<String> inputCol() - Param for input column name.
StringArrayParam inputCols() - Param for input column names.
Param<String> outputCol() - Param for output column name.
StringArrayParam outputCols() - Param for output column names.
Dataset<Row> transform(Dataset<?> dataset) - Transforms the input dataset.
StructType transformSchema(StructType schema) - Check transform validity and derive the output schema from the input schema.
String uid() - An immutable unique ID for the object and its derivatives.
MLWriter write() - Returns an MLWriter instance for this ML instance.
Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface org.apache.spark.ml.util.MLWritable:
save

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
handleInvalid

Param for how to handle invalid data during transform(). Options are 'keep' (invalid data presented as an extra categorical feature) or 'error' (throw an error). Note that this Param is only used during transform; during fitting, invalid data will result in an error. Default: "error"

Specified by:
handleInvalid in interface HasHandleInvalid
Specified by:
handleInvalid in interface OneHotEncoderBase
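A hedged sketch (assumes a fitted OneHotEncoderModel named model and a DataFrame testDf that may contain category indices unseen during fitting; both names are hypothetical):

```java
// "keep" maps unseen category indices to an extra (last) vector position
// instead of throwing during transform; the default is "error".
model.setHandleInvalid("keep");
model.transform(testDf).show(false);
```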
dropLast

Whether to drop the last category in the encoded vector (default: true).

Specified by:
dropLast in interface OneHotEncoderBase
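For illustration (hypothetical fitted model named model and input DataFrame df): with dropLast = true, a column with N observed categories is encoded into a vector of length N - 1, since the last category is represented by an all-zeros vector; setting it to false keeps all N positions.

```java
// Keep all category positions in the output vectors (length N rather
// than N - 1 for a column with N observed categories).
model.setDropLast(false);
model.transform(df).show(false);
```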
outputCols

Param for output column names.

Specified by:
outputCols in interface HasOutputCols

outputCol

Param for output column name.

Specified by:
outputCol in interface HasOutputCol

inputCols

Param for input column names.

Specified by:
inputCols in interface HasInputCols

inputCol

Param for input column name.

Specified by:
inputCol in interface HasInputCol
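The single-column (inputCol/outputCol) and multi-column (inputCols/outputCols) params are alternatives: exactly one pair should be set. A small sketch with hypothetical column names, assuming a fitted model named model:

```java
// Point the fitted model at differently named index columns; the column
// layout (number and order of inputs) must match what the model was fit on.
model.setInputCols(new String[]{"idx1", "idx2"})
     .setOutputCols(new String[]{"vec1", "vec2"});
```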
uid

An immutable unique ID for the object and its derivatives.

Specified by:
uid in interface Identifiable
categorySizes

public int[] categorySizes()
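A brief sketch (hypothetical fitted model named model with two input columns):

```java
// One entry per input column, in order; e.g. sizes[0] is the number of
// distinct categories seen in the first input column during fitting.
int[] sizes = model.categorySizes();
System.out.println(java.util.Arrays.toString(sizes));
```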
transformSchema

Check transform validity and derive the output schema from the input schema.

We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

A typical implementation should first verify the schema change and parameter validity, including complex parameter interaction checks.

Specified by:
transformSchema in class PipelineStage

Parameters:
schema - (undocumented)
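A hedged sketch (assumes a fitted model named model and a DataFrame df, both hypothetical): transformSchema validates the configuration against a schema without running a job and throws if, for example, an input column is missing or the single- and multi-column params conflict.

```java
import org.apache.spark.sql.types.StructType;

// Derive the output schema up front; an invalid configuration fails here
// rather than at job execution time.
StructType out = model.transformSchema(df.schema());
System.out.println(out.treeString());
```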
transform

Transforms the input dataset.

Specified by:
transform in class Transformer

Parameters:
dataset - (undocumented)
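Continuing the hypothetical example (fitted model model and DataFrame df): each configured input column yields an output column holding its one-hot encoded vector.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Returns a new DataFrame with the encoded vector columns appended;
// the input columns are left in place.
Dataset<Row> encoded = model.transform(df);
encoded.show(false);
```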
copy

Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().

Specified by:
copy in interface Params
Specified by:
copy in class Model<OneHotEncoderModel>

Parameters:
extra - (undocumented)
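A sketch of copy (hypothetical model named model): the copy keeps the same uid but applies the extra param overrides; the original instance is not modified.

```java
import org.apache.spark.ml.feature.OneHotEncoderModel;
import org.apache.spark.ml.param.ParamMap;

// Override dropLast only in the copy; `model` itself is untouched.
ParamMap extra = new ParamMap().put(model.dropLast(), false);
OneHotEncoderModel copied = model.copy(extra);
```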
write

Returns an MLWriter instance for this ML instance.

Specified by:
write in interface MLWritable
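A sketch of persistence (hypothetical path and model name): the writer saves the fitted model, and OneHotEncoderModel.load restores it.

```java
import org.apache.spark.ml.feature.OneHotEncoderModel;

// Persist the fitted model and read it back later.
model.write().overwrite().save("/tmp/one-hot-encoder-model");
OneHotEncoderModel restored = OneHotEncoderModel.load("/tmp/one-hot-encoder-model");
```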
toString

Specified by:
toString in interface Identifiable
Overrides:
toString in class Object