org.apache.spark.sql.SQLContext
Serializable, org.apache.spark.internal.Logging
The entry point for working with structured data (rows and columns) in Spark 1.x.
As of Spark 2.0, this is replaced by SparkSession. However, we are keeping the class here for backward compatibility.
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging: LogStringContext, SparkShellLoggingFilter
Convert a BaseRelation created for external data sources into a DataFrame.
void: Caches the specified table in-memory.
static void
void: Removes all cached tables from the in-memory cache.
Applies a schema to a List of Java Beans.
:: DeveloperApi :: Creates a DataFrame from a JList containing Rows using the given schema.
Applies a schema to an RDD of Java Beans.
:: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema.
Applies a schema to an RDD of Java Beans.
Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
:: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema.
createDataFrame(scala.collection.immutable.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> evidence$2)
Creates a DataFrame from a local Seq of Product.
Creates a Dataset from a JList of a given type.
Creates a Dataset from an RDD of a given type.
Creates a Dataset from a local Seq of data of a given type.
void: Drops the temporary table with the given table name in the catalog.
Returns a DataFrame with no rows or columns.
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
Return all the configuration properties that have been set (i.e. not the default).
Return the value of Spark SQL configuration property for the given key.
Return the value of Spark SQL configuration property for the given key, or a default value if the key is not set.
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
boolean: Returns true if the table is currently cached in-memory.
An interface to register custom QueryExecutionListeners that listen for execution metrics.
Returns a SQLContext as a new session, with separated SQL configurations, temporary tables, registered functions, but sharing the same SparkContext, cached data and other things.
Creates a DataFrame with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
range(long start, long end)
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
range(long start, long end, long step)
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
range(long start, long end, long step, int numPartitions)
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with the number of partitions specified.
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.
static void
void: Set the given Spark SQL configuration property.
abstract void: Set Spark SQL configuration properties.
Executes a SQL query using Spark, returning the result as a DataFrame.
Returns a StreamingQueryManager that allows managing all the StreamingQueries active on this context.
Returns the specified table as a DataFrame.
Returns the names of tables in the current database as an array.
Returns the names of tables in the given database as an array.
Returns a DataFrame containing names of existing tables in the current database.
Returns a DataFrame containing names of existing tables in the given database.
A collection of methods for registering user-defined functions (UDF).
void: Removes the specified table from the in-memory cache.
Methods inherited from interface org.apache.spark.internal.Logging: initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
public static void clearActive()
Returns a SQLContext as a new session, with separated SQL configurations, temporary tables, registered functions, but sharing the same SparkContext, cached data and other things.
An interface to register custom QueryExecutionListeners that listen for execution metrics.
props
- (undocumented)
key
- (undocumented)
value
- (undocumented)
key
- (undocumented)
Return the value of Spark SQL configuration property for the given key. If the key is not set yet, return defaultValue.
key
- (undocumented)
defaultValue
- (undocumented)
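For example, a property can be set on the context and read back with a fallback default (a minimal sketch; the key shown is an existing Spark SQL setting and the value is illustrative):
sqlContext.setConf("spark.sql.shuffle.partitions", "8")
val numShufflePartitions = sqlContext.getConf("spark.sql.shuffle.partitions", "200")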
Returns a DataFrame with no rows or columns.
A collection of methods for registering user-defined functions (UDF).
The following example registers a Scala closure as UDF:
sqlContext.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)
The following example registers a UDF in Java:
sqlContext.udf().register("myUDF",
(Integer arg1, String arg2) -> arg2 + arg1,
DataTypes.StringType);
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
Returns true if the table is currently cached in-memory.
tableName
- (undocumented)
Caches the specified table in-memory.
tableName
- (undocumented)
Removes the specified table from the in-memory cache.
tableName
- (undocumented)
public void clearCache()
Removes all cached tables from the in-memory cache.
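For example, the caching methods are typically used together (a minimal sketch; it assumes a temporary view named "people" has already been registered on this context):
sqlContext.cacheTable("people")        // cache the table in memory
assert(sqlContext.isCached("people"))  // verify it is cached
sqlContext.uncacheTable("people")      // drop this table from the cache
sqlContext.clearCache()                // or drop every cached table at once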
rdd
- (undocumented)
evidence$1
- (undocumented)
data
- (undocumented)
evidence$2
- (undocumented)
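For example, a DataFrame can be built directly from a local Seq of case class instances (a minimal sketch; the Person case class and its values are illustrative):
case class Person(name: String, age: Int)
val df = sqlContext.createDataFrame(Seq(Person("Alice", 29), Person("Bob", 31)))
df.printSchema()
// root
//  |-- name: string (nullable = true)
//  |-- age: integer (nullable = false)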
Convert a BaseRelation created for external data sources into a DataFrame.
baseRelation
- (undocumented)
:: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, a runtime exception will be thrown. Example:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val schema =
StructType(
StructField("name", StringType, false) ::
StructField("age", IntegerType, true) :: Nil)
val people =
sc.textFile("examples/src/main/resources/people.txt").map(
_.split(",")).map(p => Row(p(0), p(1).trim.toInt))
val dataFrame = sqlContext.createDataFrame(people, schema)
dataFrame.printSchema
// root
// |-- name: string (nullable = false)
// |-- age: integer (nullable = true)
dataFrame.createOrReplaceTempView("people")
sqlContext.sql("select name from people").collect.foreach(println)
rowRDD
- (undocumented)
schema
- (undocumented)
Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
Example:
import spark.implicits._
case class Person(name: String, age: Long)
val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
val ds = spark.createDataset(data)
ds.show()
// +-------+---+
// | name|age|
// +-------+---+
// |Michael| 29|
// | Andy| 30|
// | Justin| 19|
// +-------+---+
data
- (undocumented)
evidence$3
- (undocumented)
Creates a Dataset from an RDD of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
data
- (undocumented)
evidence$4
- (undocumented)
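For example (a minimal sketch; the encoder for Int comes from the implicits of this SQLContext):
import sqlContext.implicits._
val ds = sqlContext.createDataset(sc.parallelize(Seq(1, 2, 3)))
ds.show()
// +-----+
// |value|
// +-----+
// |    1|
// |    2|
// |    3|
// +-----+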
Creates a Dataset from a JList of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
Java Example:
List<String> data = Arrays.asList("hello", "world");
Dataset<String> ds = spark.createDataset(data, Encoders.STRING());
data
- (undocumented)
evidence$5
- (undocumented)
:: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, a runtime exception will be thrown.
rowRDD
- (undocumented)
schema
- (undocumented)
:: DeveloperApi :: Creates a DataFrame from a JList containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided List matches the provided schema. Otherwise, a runtime exception will be thrown.
rows
- (undocumented)
schema
- (undocumented)
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
rdd
- (undocumented)
beanClass
- (undocumented)
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
rdd
- (undocumented)
beanClass
- (undocumented)
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
data
- (undocumented)
beanClass
- (undocumented)
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
sqlContext.read.parquet("/path/to/file.parquet")
sqlContext.read.schema(schema).json("/path/to/file.json")
Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.
sparkSession.readStream.parquet("/path/to/directory/of/parquet/files")
sparkSession.readStream.schema(schema).json("/path/to/directory/of/json/files")
tableName
- (undocumented)
path
- (undocumented)
tableName
- (undocumented)
path
- (undocumented)
source
- (undocumented)
tableName
- (undocumented)
source
- (undocumented)
options
- (undocumented)
tableName
- (undocumented)
source
- (undocumented)
options
- (undocumented)
tableName
- (undocumented)
source
- (undocumented)
schema
- (undocumented)
options
- (undocumented)
tableName
- (undocumented)
source
- (undocumented)
schema
- (undocumented)
options
- (undocumented)
tableName
- the name of the table to be unregistered.
Creates a DataFrame with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
end
- (undocumented)
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
start
- (undocumented)
end
- (undocumented)
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
start
- (undocumented)
end
- (undocumented)
step
- (undocumented)
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with the number of partitions specified.
start
- (undocumented)
end
- (undocumented)
step
- (undocumented)
numPartitions
- (undocumented)
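For example, the following produces the even numbers below 10 spread over 4 partitions (a minimal sketch):
val df = sqlContext.range(0L, 10L, 2L, 4)
df.show()
// +---+
// | id|
// +---+
// |  0|
// |  2|
// |  4|
// |  6|
// |  8|
// +---+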
Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
sqlText
- (undocumented)
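For example (a minimal sketch; it assumes a temporary view named "people" with name and age columns has been created):
val adults = sqlContext.sql("SELECT name FROM people WHERE age >= 18")
adults.show()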
Returns the specified table as a DataFrame.
tableName
- (undocumented)
Returns a DataFrame containing names of existing tables in the current database. The returned DataFrame has three columns: database, tableName and isTemporary (a Boolean indicating if a table is a temporary one or not).
Returns a DataFrame containing names of existing tables in the given database. The returned DataFrame has three columns: database, tableName and isTemporary (a Boolean indicating if a table is a temporary one or not).
databaseName
- (undocumented)
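For example (a minimal sketch; "mydb" is a hypothetical database name):
sqlContext.tables().show()                                    // tables in the current database
sqlContext.tables("mydb").filter("isTemporary = true").show() // temporary tables in "mydb"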
Returns a StreamingQueryManager that allows managing all the StreamingQueries active on this context.
databaseName
- (undocumented)
rowRDD
- (undocumented)
schema
- (undocumented)
rowRDD
- (undocumented)
schema
- (undocumented)
rdd
- (undocumented)
beanClass
- (undocumented)
rdd
- (undocumented)
beanClass
- (undocumented)
Loads a Parquet file, returning the result as a DataFrame. This function returns an empty DataFrame if no paths are passed in.
paths
- (undocumented)
Loads a JSON file (one object per line), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
path
- (undocumented)
Loads a JSON file (one object per line) and applies the given schema, returning the result as a DataFrame.
path
- (undocumented)
schema
- (undocumented)
path
- (undocumented)
samplingRatio
- (undocumented)
Loads an RDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
json
- (undocumented)
Loads an RDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
json
- (undocumented)
Loads an RDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.
json
- (undocumented)
schema
- (undocumented)
Loads a JavaRDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.
json
- (undocumented)
schema
- (undocumented)
Loads an RDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.
json
- (undocumented)
samplingRatio
- (undocumented)
Loads a JavaRDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.
json
- (undocumented)
samplingRatio
- (undocumented)
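The same functionality is available through the DataFrameReader returned by read; for example (a minimal sketch; the path is illustrative):
val people = sqlContext.read.json("examples/src/main/resources/people.json")
people.printSchema()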
path
- (undocumented)
path
- (undocumented)
source
- (undocumented)
source
- (undocumented)
options
- (undocumented)
source
- (undocumented)
options
- (undocumented)
source
- (undocumented)
schema
- (undocumented)
options
- (undocumented)
source
- (undocumented)
schema
- (undocumented)
options
- (undocumented)
Construct a DataFrame representing the database table accessible via JDBC URL url named table.
url
- (undocumented)
table
- (undocumented)
Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.
columnName - the name of a column of integral type that will be used for partitioning.
lowerBound - the minimum value of columnName used to decide partition stride
upperBound - the maximum value of columnName used to decide partition stride
numPartitions - the number of partitions. The range minValue to maxValue will be split evenly into this many partitions.
url
- (undocumented)
table
- (undocumented)
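For example (a minimal sketch; the JDBC URL, table name, and bounds are illustrative, and the corresponding JDBC driver must be on the classpath):
val people = sqlContext.jdbc(
  "jdbc:postgresql://localhost/test?user=fred&password=secret", // url
  "people",  // table
  "id",      // integral column used for partitioning
  0L,        // lowerBound
  10000L,    // upperBound
  4)         // numPartitions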
Construct a DataFrame representing the database table accessible via JDBC URL url named table. The theParts parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.
url
- (undocumented)
table
- (undocumented)
theParts
- (undocumented)
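For example (a minimal sketch; the predicates, URL, and table name are illustrative):
// each WHERE-clause expression below becomes one partition of the resulting DataFrame
val parts = Array("age < 30", "age >= 30")
val people = sqlContext.jdbc("jdbc:postgresql://localhost/test", "people", parts)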