Computes specified statistics for numeric and string columns. Available statistics are:
count
mean
stddev
min
max
arbitrary approximate percentiles specified as a percentage (e.g., "75%")
If no statistics are given, this function computes count, mean, stddev, min, approximate quartiles (percentiles at 25%, 50%, and 75%), and max. This function is meant for exploratory data analysis, as no guarantee is made about the backward compatibility of the schema of the resulting SparkDataFrame. If you want to programmatically compute summary statistics, use the agg function instead, as in the sketch below.
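As a hedged sketch of that programmatic alternative (assuming df is a SparkDataFrame with a numeric column named "age", used here purely for illustration):

# Aggregate expressions built from SparkR column functions; unlike
# summary(), the resulting schema is chosen by the caller.
stats <- agg(df, mean(df$age), stddev(df$age), min(df$age), max(df$age))
head(stats)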
Usage
summary(object, ...)

# S4 method for class 'SparkDataFrame'
summary(object, ...)
Arguments
object
a SparkDataFrame to be summarized.
...
(optional) statistics to be computed for all columns.
Note
summary(SparkDataFrame) since 1.5.0
The statistics provided by summary were changed in 2.3.0. Use describe for the previous defaults.
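A minimal comparison of the two, assuming df is an existing SparkDataFrame:

# describe() keeps the pre-2.3.0 defaults: count, mean, stddev, min, max.
describe(df)
# summary() additionally reports the approximate quartiles (25%, 50%, 75%).
summary(df)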
See also
Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), select(), showDF(), show(), storageLevel(), str(), subset(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), unpivot(), withColumn(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream(), write.text()
Examples
# Not run:
sparkR.session()
path <- "path/to/file.json"
df <- read.json(path)
summary(df)
summary(df, "min", "25%", "75%", "max")
summary(select(df, "age", "height"))
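Because summary returns a SparkDataFrame rather than a local object, a collect step is needed to inspect the result in R. A minimal sketch building on the example above:

# Bring the (small) summary result to the driver as a local R data.frame.
local_stats <- collect(summary(df, "mean", "50%"))
print(local_stats)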