Class VectorAssembler

All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, Params, HasHandleInvalid, HasInputCols, HasOutputCol, DefaultParamsWritable, Identifiable, MLWritable
A feature transformer that merges multiple columns into a vector column. This requires one pass over the entire dataset. If column lengths must be inferred from the data, an additional call to the 'first' Dataset method is required; see the 'handleInvalid' parameter.
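A minimal usage sketch in Scala (hedged: it assumes an active SparkSession named spark, and the column names and values are illustrative, not part of this API):

    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.ml.linalg.Vectors

    // Illustrative dataset: two numeric columns and one existing vector column.
    val dataset = spark.createDataFrame(Seq(
      (0, 18, 1.0, Vectors.dense(0.0, 10.0, 0.5), 1.0)
    )).toDF("id", "hour", "mobile", "userFeatures", "clicked")

    // Merge the three feature columns into a single vector column "features".
    val assembler = new VectorAssembler()
      .setInputCols(Array("hour", "mobile", "userFeatures"))
      .setOutputCol("features")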
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging: org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Constructors:
VectorAssembler()

Methods:
copy(ParamMap extra): Creates a copy of this instance with the same UID and some extra params.
handleInvalid(): Param for how to handle invalid data (NULL values).
inputCols(): Param for input column names.
outputCol(): Param for output column name.
transform(Dataset dataset): Transforms the input dataset.
transformSchema(StructType schema): Check transform validity and derive the output schema from the input schema.
uid(): An immutable unique ID for the object and its derivatives.
Methods inherited from interface org.apache.spark.internal.Logging: initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
Methods inherited from interface org.apache.spark.ml.util.MLWritable: save
Methods inherited from interface org.apache.spark.ml.param.Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
VectorAssembler
public VectorAssembler()
outputCol
Param for output column name.
Specified by: outputCol in interface HasOutputCol
inputCols
Param for input column names.
Specified by: inputCols in interface HasInputCols
uid
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable
handleInvalid
Param for how to handle invalid data (NULL values). Options are 'skip' (filter out rows with invalid data), 'error' (throw an error), or 'keep' (return the relevant number of NaN values in the output). Column lengths are taken from the size of the ML Attribute Group, which can be set using VectorSizeHint in a pipeline before VectorAssembler. When handleInvalid is 'error' or 'skip', column lengths can also be inferred from the first rows of the data, since it is safe to do so in those cases. Default: "error"
Specified by: handleInvalid in interface HasHandleInvalid
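A sketch of the 'keep' option, continuing the illustrative example from the class description: VectorSizeHint declares the vector column's size up front, so invalid entries can be replaced with NaN without an extra pass to infer lengths (the size value 3 matches the illustrative data):

    import org.apache.spark.ml.feature.{VectorAssembler, VectorSizeHint}

    // Declare the length of the vector column so 'keep' never needs to infer it.
    val sizeHint = new VectorSizeHint()
      .setInputCol("userFeatures")
      .setSize(3)

    // Rows with NULLs now produce NaN entries instead of raising an error.
    val keepAssembler = new VectorAssembler()
      .setInputCols(Array("hour", "mobile", "userFeatures"))
      .setOutputCol("features")
      .setHandleInvalid("keep")

    val assembled = keepAssembler.transform(sizeHint.transform(dataset))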
transform
Transforms the input dataset.
Specified by: transform in class Transformer
Parameters: dataset - (undocumented)
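Continuing the sketch from the class description, transform appends the assembled vector column and leaves the input columns intact:

    val output = assembler.transform(dataset)
    // The assembled vector concatenates the inputs in order:
    // hour, mobile, then the three entries of userFeatures.
    output.select("features", "clicked").show(truncate = false)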
transformSchema
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks that do not depend on other parameters are handled by Param.validate().
A typical implementation should first verify the schema change and parameter validity, including complex parameter interaction checks.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)
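Because transformSchema only inspects the schema, it can validate a stage eagerly, before any Spark job runs. A small sketch with hypothetical column names:

    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.sql.types._

    val schema = StructType(Seq(
      StructField("hour", IntegerType),
      StructField("mobile", DoubleType)
    ))

    // Throws if the parameters are invalid for this schema; otherwise returns
    // the input schema extended with the "features" vector column.
    val outSchema = new VectorAssembler()
      .setInputCols(Array("hour", "mobile"))
      .setOutputCol("features")
      .transformSchema(schema)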
copy
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Transformer
Parameters: extra - (undocumented)
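A short sketch of copy, reusing the assembler from the class description; the extra ParamMap overrides parameters in the copy while the original stage is untouched:

    import org.apache.spark.ml.param.ParamMap

    // The copy shares the original's UID but writes to a different output column.
    val copied = assembler.copy(ParamMap(assembler.outputCol -> "assembled"))
    assert(copied.uid == assembler.uid)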
toString
Specified by: toString in interface Identifiable
Overrides: toString in class Object