pyspark.ml.fpm.PrefixSpan(*, minSupport: float = 0.1, maxPatternLength: int = 10, maxLocalProjDBSize: int = 32000000, sequenceCol: str = 'sequence')
A parallel PrefixSpan algorithm to mine frequent sequential patterns. The PrefixSpan algorithm is described in J. Pei, et al., "PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth". This class is not yet an Estimator/Transformer; use the findFrequentSequentialPatterns() method to run the PrefixSpan algorithm.
Notes
See Sequential Pattern Mining (Wikipedia)
Examples
>>> from pyspark.ml.fpm import PrefixSpan
>>> from pyspark.sql import Row
>>> df = sc.parallelize([Row(sequence=[[1, 2], [3]]),
...                      Row(sequence=[[1], [3, 2], [1, 2]]),
...                      Row(sequence=[[1, 2], [5]]),
...                      Row(sequence=[[6]])]).toDF()
>>> prefixSpan = PrefixSpan()
>>> prefixSpan.getMaxLocalProjDBSize()
32000000
>>> prefixSpan.getSequenceCol()
'sequence'
>>> prefixSpan.setMinSupport(0.5)
PrefixSpan...
>>> prefixSpan.setMaxPatternLength(5)
PrefixSpan...
>>> prefixSpan.findFrequentSequentialPatterns(df).sort("sequence").show(truncate=False)
+----------+----+
|sequence  |freq|
+----------+----+
|[[1]]     |3   |
|[[1], [3]]|2   |
|[[2]]     |3   |
|[[2, 1]]  |3   |
|[[3]]     |2   |
+----------+----+
...
Methods

clear(param)
    Clears a param from the param map if it has been explicitly set.

copy([extra])
    Creates a copy of this instance with the same uid and some extra params.

explainParam(param)
    Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()
    Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap([extra])
    Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

findFrequentSequentialPatterns(dataset)
    Finds the complete set of frequent sequential patterns in the input sequences of itemsets.

getMaxLocalProjDBSize()
    Gets the value of maxLocalProjDBSize or its default value.

getMaxPatternLength()
    Gets the value of maxPatternLength or its default value.

getMinSupport()
    Gets the value of minSupport or its default value.

getOrDefault(param)
    Gets the value of a param in the user-supplied param map or its default value.

getParam(paramName)
    Gets a param by its name.

getSequenceCol()
    Gets the value of sequenceCol or its default value.

hasDefault(param)
    Checks whether a param has a default value.

hasParam(paramName)
    Tests whether this instance contains a param with a given (string) name.

isDefined(param)
    Checks whether a param is explicitly set by user or has a default value.

isSet(param)
    Checks whether a param is explicitly set by user.

set(param, value)
    Sets a parameter in the embedded param map.

setMaxLocalProjDBSize(value)
    Sets the value of maxLocalProjDBSize.

setMaxPatternLength(value)
    Sets the value of maxPatternLength.

setMinSupport(value)
    Sets the value of minSupport.

setParams(self, *[, minSupport, …])

setSequenceCol(value)
    Sets the value of sequenceCol.
Attributes

maxLocalProjDBSize
maxPatternLength
minSupport
params
sequenceCol
Methods Documentation
clear(param: pyspark.ml.param.Param) → None
Clears a param from the param map if it has been explicitly set.
copy(extra: Optional[ParamMap] = None) → JP
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters
    extra : dict, optional
        Extra parameters to copy to the new instance

Returns
    JavaParams
        Copy of this instance
explainParam(param: Union[str, pyspark.ml.param.Param]) → str
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams() → str
Returns the documentation of all params with their optionally default values and user-supplied values.
extractParamMap(extra: Optional[ParamMap] = None) → ParamMap
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters
    extra : dict, optional
        extra param values

Returns
    dict
        merged param map
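As a quick illustration of that precedence, here is a minimal sketch (assuming an active SparkSession, since PrefixSpan wraps a Java object): an entry in extra overrides a user-supplied value, which in turn overrides the default.

from pyspark.ml.fpm import PrefixSpan

ps = PrefixSpan()                                  # default minSupport is 0.1
ps.setMinSupport(0.5)                              # user-supplied value overrides the default
merged = ps.extractParamMap({ps.minSupport: 0.9})  # extra value overrides both
print(merged[ps.minSupport])                       # 0.9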
findFrequentSequentialPatterns(dataset: pyspark.sql.dataframe.DataFrame) → pyspark.sql.dataframe.DataFrame
Finds the complete set of frequent sequential patterns in the input sequences of itemsets.

Parameters
    dataset : pyspark.sql.DataFrame
        A dataframe containing a sequence column of type ArrayType(ArrayType(T)), where T is the item type for the input dataset.

Returns
    pyspark.sql.DataFrame
        A DataFrame that contains columns of sequence and corresponding frequency. Its schema will be:
            sequence: ArrayType(ArrayType(T)) (T is the item type)
            freq: Long
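Because the result is an ordinary DataFrame, it can be post-processed with standard DataFrame operations. A minimal sketch, assuming a running SparkSession named spark (just as the Examples above assume a running SparkContext sc):

from pyspark.ml.fpm import PrefixSpan
from pyspark.sql import Row, functions as F

df = spark.createDataFrame([
    Row(sequence=[[1, 2], [3]]),
    Row(sequence=[[1], [3, 2], [1, 2]]),
    Row(sequence=[[1, 2], [5]]),
])
patterns = PrefixSpan(minSupport=0.5).findFrequentSequentialPatterns(df)
# Keep only patterns spanning at least two itemsets, most frequent first.
patterns.filter(F.size("sequence") >= 2).orderBy(F.desc("freq")).show(truncate=False)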
getMaxLocalProjDBSize() → int
Gets the value of maxLocalProjDBSize or its default value.
getMaxPatternLength() → int
Gets the value of maxPatternLength or its default value.
getMinSupport() → float
Gets the value of minSupport or its default value.
getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
getParam(paramName: str) → pyspark.ml.param.Param
Gets a param by its name.
getSequenceCol() → str
Gets the value of sequenceCol or its default value.
hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool
Checks whether a param has a default value.
hasParam(paramName: str) → bool
Tests whether this instance contains a param with a given (string) name.
isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool
Checks whether a param is explicitly set by user or has a default value.
isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool
Checks whether a param is explicitly set by user.
set(param: pyspark.ml.param.Param, value: Any) → None
Sets a parameter in the embedded param map.
setMaxLocalProjDBSize(value: int) → pyspark.ml.fpm.PrefixSpan
Sets the value of maxLocalProjDBSize.
setMaxPatternLength(value: int) → pyspark.ml.fpm.PrefixSpan
Sets the value of maxPatternLength.
setMinSupport(value: float) → pyspark.ml.fpm.PrefixSpan
Sets the value of minSupport.
setParams(self, *, minSupport=0.1, maxPatternLength=10, maxLocalProjDBSize=32000000, sequenceCol="sequence")
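A short usage sketch (assuming an active SparkSession, since PrefixSpan wraps a Java object); setParams takes the same keyword-only arguments as the constructor and returns the instance:

from pyspark.ml.fpm import PrefixSpan

ps = PrefixSpan()
ps.setParams(minSupport=0.3, maxPatternLength=5)
print(ps.getMinSupport(), ps.getMaxPatternLength())  # 0.3 5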
setSequenceCol(value: str) → pyspark.ml.fpm.PrefixSpan
Sets the value of sequenceCol.
Attributes Documentation
maxLocalProjDBSize: pyspark.ml.param.Param[int] = Param(parent='undefined', name='maxLocalProjDBSize', doc='The maximum number of items (including delimiters used in the internal storage format) allowed in a projected database before local processing. If a projected database exceeds this size, another iteration of distributed prefix growth is run. Must be > 0.')
maxPatternLength: pyspark.ml.param.Param[int] = Param(parent='undefined', name='maxPatternLength', doc='The maximal length of the sequential pattern. Must be > 0.')
minSupport: pyspark.ml.param.Param[float] = Param(parent='undefined', name='minSupport', doc='The minimal support level of the sequential pattern. Sequential pattern that appears more than (minSupport * size-of-the-dataset) times will be output. Must be >= 0.')
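To make the threshold concrete, a small worked example using the formula above and the 4-row dataset from the Examples section:

min_support = 0.5
n_sequences = 4                        # rows in the Examples DataFrame
threshold = min_support * n_sequences  # 0.5 * 4 = 2.0
# Consistent with the Examples output, where every returned pattern
# (e.g. [[1], [3]] and [[3]]) appears in at least 2 input sequences.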
params
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
sequenceCol: pyspark.ml.param.Param[str] = Param(parent='undefined', name='sequenceCol', doc='The name of the sequence column in dataset, rows with nulls in this column are ignored.')