withColumn — SparkR

Return a new SparkDataFrame by adding a column or replacing the existing column that has the same name.

Usage
withColumn(x, colName, col)

# S4 method for class 'SparkDataFrame,character'
withColumn(x, colName, col)
Arguments

x
a SparkDataFrame.

colName
a column name.

col
a Column expression (which must refer only to this SparkDataFrame), or an atomic vector of length 1, used as a literal value.

Value

A SparkDataFrame with the new column added or the existing column replaced.
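SparkDataFrames are immutable, so withColumn leaves its input unchanged and returns a new SparkDataFrame. A minimal sketch, assuming an active SparkR session and a SparkDataFrame df with a numeric column col1 (both names are illustrative, not from this page):

df2 <- withColumn(df, "doubled", df$col1 * 2)
colnames(df)   # the original SparkDataFrame does not gain "doubled"
colnames(df2)  # the returned SparkDataFrame carries the new column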

Details

Note: This method introduces a projection internally. Calling it repeatedly, for instance in a loop to add multiple columns, can therefore generate large query plans, which can cause performance issues and even a StackOverflowException. To avoid this, use select() with the multiple columns at once, as sketched below.
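A hedged sketch of that advice (the SparkDataFrame df and its numeric columns a and b are illustrative): instead of chaining withColumn in a loop, build all of the new columns in a single projection. In SparkR, mutate() adds several columns in one call:

# Anti-pattern: each iteration stacks another projection onto the plan
# for (nm in c("a", "b")) df <- withColumn(df, paste0(nm, "2"), df[[nm]] * 2)

# Preferred: one projection that adds both columns at once
df2 <- mutate(df, a2 = df$a * 2, b2 = df$b * 2)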

Note

withColumn since 1.4.0

See also

rename(), mutate(), subset()

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), select(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), unpivot(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream(), write.text()

Examples
sparkR.session()
path <- "path/to/file.json"
df <- read.json(path)
# Add a new column derived from an existing one
newDF <- withColumn(df, "newCol", df$col1 * 5)
# Replace an existing column
newDF2 <- withColumn(newDF, "newCol", newDF$col1)
# Replace with a length-one literal value
newDF3 <- withColumn(newDF, "newCol", 42)
# Use the extract operator to set an existing or new column
df[["age"]] <- 23
df[[2]] <- df$col1
df[[2]] <- NULL # drop column
