Bases: Value
Bases: ProgrammingError
Bases: Value
Bases: Value
Bases: Value
Bases: pybind11_object
Members:
LINE_FEED
CARRIAGE_RETURN_LINE_FEED
Bases: ProgrammingError
Create a column reference from the provided column name
Bases: OperationalError
Create a constant expression from the provided value
Bases: IntegrityError
Bases: DataError
Bases: DatabaseError
Bases: Value
Bases: Value
Bases: Value
Bases: pybind11_object
Append the passed DataFrame to the named table
Create an array type object of ‘type’
Fetch a result as Arrow table following execute()
Start a new transaction
Synchronizes data in the write-ahead log (WAL) to the database data file (no-op for in-memory connections)
Close the connection
Commit changes performed within a transaction
Create a DuckDB function from the passed-in Python function so it can be used in queries
Create a duplicate of the current connection
Create a decimal type with ‘width’ and ‘scale’
Get result set attributes, mainly column names
Fetch a result as DataFrame following execute()
Create a type object by parsing the ‘type_str’ string
Create a duplicate of the current connection
Create an enum type of underlying ‘type’, consisting of the list of ‘values’
Execute the given SQL query, optionally using prepared statements with parameters set
Execute the given prepared statement multiple times using the list of parameter sets in parameters
Parse the query string and extract the Statement object(s) produced
Fetch a result as Arrow table following execute()
Fetch a result as DataFrame following execute()
Fetch a chunk of the result as DataFrame following execute()
Fetch an Arrow RecordBatchReader following execute()
Fetch all rows from a result following execute
Fetch a result as DataFrame following execute()
Fetch the next set of rows from a result following execute
Fetch a result as list of NumPy arrays following execute
Fetch a single row from a result following execute
Check if a filesystem with the provided name is currently registered
Create a relation object from an Arrow object
Create a relation object from the CSV file in ‘name’
Create a relation object from the DataFrame in df
Overloaded function.
from_parquet(self: duckdb.duckdb.DuckDBPyConnection, file_glob: str, binary_as_string: bool = False, *, file_row_number: bool = False, filename: bool = False, hive_partitioning: bool = False, union_by_name: bool = False, compression: object = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from the Parquet files in file_glob
from_parquet(self: duckdb.duckdb.DuckDBPyConnection, file_globs: list[str], binary_as_string: bool = False, *, file_row_number: bool = False, filename: bool = False, hive_partitioning: bool = False, union_by_name: bool = False, compression: object = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from the Parquet files in file_globs
Run a SQL query. If it is a SELECT statement, create a relation object from the given SQL query, otherwise run the query as-is.
Extract the required table names from a query
Install an extension by name, with an optional version and/or repository to get the extension from
Interrupt pending operations
List registered filesystems, including builtin ones
Create a list type object of ‘type’
Load an installed extension
Create a map type object from ‘key_type’ and ‘value_type’
Fetch a result as Polars DataFrame following execute()
Run a SQL query. If it is a SELECT statement, create a relation object from the given SQL query, otherwise run the query as-is.
Create a relation object from the CSV file in ‘name’
Create a relation object from the JSON file in ‘name’
Overloaded function.
read_parquet(self: duckdb.duckdb.DuckDBPyConnection, file_glob: str, binary_as_string: bool = False, *, file_row_number: bool = False, filename: bool = False, hive_partitioning: bool = False, union_by_name: bool = False, compression: object = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from the Parquet files in file_glob
read_parquet(self: duckdb.duckdb.DuckDBPyConnection, file_globs: list[str], binary_as_string: bool = False, *, file_row_number: bool = False, filename: bool = False, hive_partitioning: bool = False, union_by_name: bool = False, compression: object = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from the Parquet files in file_globs
Register the passed Python Object value for querying with a view
Register an fsspec-compliant filesystem
Remove a previously created function
Roll back changes performed within a transaction
Create a struct type object from ‘fields’
Get result set row count
Run a SQL query. If it is a SELECT statement, create a relation object from the given SQL query, otherwise run the query as-is.
Create a type object by parsing the ‘type_str’ string
Create a string type with an optional collation
Create a struct type object from ‘fields’
Create a relation object for the named table
Create a relation object from the named table function with given parameters
Fetch a result as dict of TensorFlow Tensors following execute()
Fetch a result as dict of PyTorch Tensors following execute()
Create a type object by parsing the ‘type_str’ string
Create a union type object from ‘members’
Unregister the view name
Unregister a filesystem
Create a relation object from the passed values
Create a relation object for the named view
Bases: pybind11_object
Compute the aggregate aggr_expr by the optional groups group_expr on the relation
Get the name of the current alias
Returns the first non-null value from a given column
Compute the function of a single column or a list of columns by the optional groups on the relation
Finds the row with the maximum value for a value column and returns the value of that row for an argument column
Finds the row with the minimum value for a value column and returns the value of that row for an argument column
Execute and fetch all rows as an Arrow Table
Computes the average on a given column
Computes the bitwise AND of all bits present in a given column
Computes the bitwise OR of all bits present in a given column
Computes the bitwise XOR of all bits present in a given column
Computes a bitstring with bits set for each distinct value in a given column
Computes the logical AND of all values present in a given column
Computes the logical OR of all values present in a given column
Closes the result
Return a list containing the names of the columns of the relation.
Computes the number of elements present in a given column
Creates a new table named table_name with the contents of the relation object
Creates a view named view_name that refers to the relation object
Create cross/cartesian product of two relational objects
Computes the cumulative distribution within the partition
Computes the dense rank within the partition
Gives basic statistics (e.g., min, max) and if NULL exists for each column of the relation.
Return the description of the result
Execute and fetch all rows as a pandas DataFrame
Retrieve distinct rows from this relation object
Return a list containing the types of the columns of the relation.
Create the set except of this relation object with another relation object in other_rel
Transform the relation into a result set
Computes the average of all values present in a given column using a more accurate floating point summation (Kahan Sum)
Execute and return an Arrow Record Batch Reader that yields all rows
Execute and fetch all rows as an Arrow Table
Execute and fetch a chunk of the rows
Execute and fetch all rows as a list of tuples
Execute and fetch all rows as a pandas DataFrame
Execute and fetch the next set of rows as a list of tuples
Execute and fetch all rows as a Python dict mapping each column to one NumPy array
Execute and fetch a single row as a tuple
Filter the relation object by the filter in filter_expr
Returns the first value of a given column
Computes the first value within the group or partition
Computes the sum of all values present in a given column using a more accurate floating point summation (Kahan Sum)
Computes the geometric mean over all values present in a given column
Computes the histogram over all values present in a given column
Inserts the given values into the relation
Inserts the relation object into an existing table named table_name
Create the set intersection of this relation object with another relation object in other_rel
Join the relation object with another relation object in other_rel using the join condition expression in join_condition. Types supported are ‘inner’, ‘left’, ‘right’, ‘outer’, ‘semi’ and ‘anti’
Computes the lag within the partition
Returns the last value of a given column
Computes the last value within the group or partition
Computes the lead within the partition
Only retrieve the first n rows from this relation object, starting at offset
Returns a list containing all values present in a given column
Calls the passed function on the relation
Returns the maximum value present in a given column
Computes the average on a given column
Computes the median over all values present in a given column
Returns the minimum value present in a given column
Computes the mode over all values present in a given column
Divides the partition as equally as possible into num_buckets
Computes the nth value within the partition
Reorder the relation object by order_expr
Computes the relative rank within the partition
Execute and fetch all rows as a Polars DataFrame
Returns the product of all values present in a given column
Project the relation object by the projection in project_expr
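Filter, projection, ordering, and limit compose into lazy pipelines; a minimal sketch:

```python
import duckdb

rel = duckdb.sql("SELECT i FROM range(10) t(i)")

result = (rel.filter("i % 2 = 0")     # keep even numbers
             .project("i * 10 AS x")  # derive a new column
             .order("x DESC")         # sort descending
             .limit(2)                # keep the first two rows
             .fetchall())
</```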
Computes the exact quantile value for a given column
Computes the interpolated quantile value for a given column
Computes the exact quantile value for a given column
Run the given SQL query in sql_query on the view named virtual_table_name that refers to the relation object
Computes the rank within the partition
Computes the dense rank within the partition
Execute and return an Arrow Record Batch Reader that yields all rows
Computes the row number within the partition
Project the relation object by the projection in project_expr
Select columns from the relation, by filtering based on type(s)
Select columns from the relation, by filtering based on type(s)
Rename the relation object to new alias
Tuple of the number of rows and number of columns in the relation.
Display a summary of the data
Reorder the relation object by the provided expressions
Get the SQL query that is equivalent to the relation
Computes the sample standard deviation for a given column
Computes the sample standard deviation for a given column
Computes the population standard deviation for a given column
Computes the sample standard deviation for a given column
Concatenates the values present in a given column with a separator
Computes the sum of all values present in a given column
Fetch a result as dict of TensorFlow Tensors
Execute and fetch all rows as an Arrow Table
Write the relation object to a CSV file in ‘file_name’
Execute and fetch all rows as a pandas DataFrame
Write the relation object to a Parquet file in ‘file_name’
Creates a new table named table_name with the contents of the relation object
Creates a view named view_name that refers to the relation object
Fetch a result as dict of PyTorch Tensors
Get the type of the relation.
Return a list containing the types of the columns of the relation.
Create the set union of this relation object with another relation object in other_rel
Returns the distinct values in a column.
Update the given relation with the provided expressions
Computes the number of elements present in a given column, also projecting the original column
Computes the sample variance for a given column
Computes the population variance for a given column
Computes the sample variance for a given column
Computes the sample variance for a given column
Write the relation object to a CSV file in ‘file_name’
Write the relation object to a Parquet file in ‘file_name’
Bases: Exception
Bases: pybind11_object
Members:
QUERY_RESULT
CHANGED_ROWS
NOTHING
Bases: pybind11_object
Members:
STANDARD
ANALYZE
Bases: pybind11_object
Create a copy of this expression with the given alias.
name: The alias to use for the expression; this affects how it can be referenced.
Expression: self with an alias.
Set the order by modifier to ASCENDING.
Create a CastExpression to type from self
type: The type to cast to
CastExpression: self::type
Set the order by modifier to DESCENDING.
Return the stringified version of the expression.
str: The string representation.
Return an IN expression comparing self to the input arguments.
DuckDBPyExpression: The compare IN expression
Return a NOT IN expression comparing self to the input arguments.
DuckDBPyExpression: The compare NOT IN expression
Create a binary IS NOT NULL expression from self
DuckDBPyExpression: self IS NOT NULL
Create a binary IS NULL expression from self
DuckDBPyExpression: self IS NULL
Set the NULL order by modifier to NULLS FIRST.
Set the NULL order by modifier to NULLS LAST.
Add an ELSE <value> clause to the CaseExpression.
value: The value to use if none of the WHEN conditions are met.
CaseExpression: self with an ELSE clause.
Print the stringified version of the expression.
Add an additional WHEN <condition> THEN <value> clause to the CaseExpression.
condition: The condition that must be met.
value: The value to use if the condition is met.
CaseExpression: self with an additional WHEN clause.
Bases: DatabaseError
Bases: Value
Bases: IOException
Thrown when an error occurs in the httpfs extension, or whilst downloading an extension.
Bases: Value
Bases: OperationalError
Bases: Value
Bases: DatabaseError
Bases: DatabaseError
Bases: InternalError
Bases: DatabaseError
Bases: Value
Bases: ProgrammingError
Bases: ProgrammingError
Bases: Value
Bases: NotSupportedError
Bases: DatabaseError
Bases: Value
Bases: DatabaseError
Bases: OperationalError
Bases: DataError
Bases: ProgrammingError
Bases: DatabaseError
Bases: DatabaseError
Bases: pybind11_object
Members:
DEFAULT
RETURN_NULL
Bases: pybind11_object
Members:
ROWS
COLUMNS
Bases: DatabaseError
Bases: OperationalError
Bases: Value
Overloaded function.
StarExpression(*, exclude: object = None) -> duckdb.duckdb.Expression
StarExpression() -> duckdb.duckdb.Expression
Bases: pybind11_object
Get the expected type of result produced by this statement; the actual type may vary depending on the statement.
Get the map of named parameters this statement has.
Get the query equivalent to this statement.
Get the type of the statement.
Bases: pybind11_object
Members:
INVALID
SELECT
INSERT
UPDATE
CREATE
DELETE
PREPARE
EXECUTE
ALTER
TRANSACTION
COPY
ANALYZE
VARIABLE_SET
CREATE_FUNC
EXPLAIN
DROP
EXPORT
PRAGMA
VACUUM
CALL
SET
LOAD
RELATION
EXTENSION
LOGICAL_PLAN
ATTACH
DETACH
MULTI
COPY_DATABASE
Bases: Value
Bases: ProgrammingError
Bases: Value
Bases: Value
Bases: Value
Bases: Value
Bases: Value
Bases: Value
Bases: Value
Bases: OperationalError
Bases: DataError
Bases: Value
Bases: Value
Bases: Value
Bases: Value
Bases: Value
Bases: object
Bases: Exception
Compute the aggregate aggr_expr by the optional groups group_expr on the relation
Rename the relation object to new alias
Append the passed DataFrame to the named table
Create an array type object of ‘type’
Overloaded function.
arrow(rows_per_batch: int = 1000000, *, connection: duckdb.DuckDBPyConnection = None) -> pyarrow.lib.Table
Fetch a result as Arrow table following execute()
arrow(rows_per_batch: int = 1000000, *, connection: duckdb.DuckDBPyConnection = None) -> pyarrow.lib.Table
Fetch a result as Arrow table following execute()
arrow(arrow_object: object, *, connection: duckdb.DuckDBPyConnection = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from an Arrow object
Start a new transaction
Synchronizes data in the write-ahead log (WAL) to the database data file (no-op for in-memory connections)
Close the connection
Commit changes performed within a transaction
Create a DuckDB database instance. Optionally takes a database file name to read/write persistent data, and a read_only flag to prevent changes
Create a DuckDB function from the passed-in Python function so it can be used in queries
Create a duplicate of the current connection
Create a decimal type with ‘width’ and ‘scale’
Retrieve the connection currently registered as the default to be used by the module
Get result set attributes, mainly column names
Overloaded function.
df(*, date_as_object: bool = False, connection: duckdb.DuckDBPyConnection = None) -> pandas.DataFrame
Fetch a result as DataFrame following execute()
df(*, date_as_object: bool = False, connection: duckdb.DuckDBPyConnection = None) -> pandas.DataFrame
Fetch a result as DataFrame following execute()
df(df: pandas.DataFrame, *, connection: duckdb.DuckDBPyConnection = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from the DataFrame df
Retrieve distinct rows from this relation object
Create a type object by parsing the ‘type_str’ string
Create a duplicate of the current connection
Create an enum type of underlying ‘type’, consisting of the list of ‘values’
Execute the given SQL query, optionally using prepared statements with parameters set
Execute the given prepared statement multiple times using the list of parameter sets in parameters
Parse the query string and extract the Statement object(s) produced
Fetch a result as Arrow table following execute()
Fetch a result as DataFrame following execute()
Fetch a chunk of the result as DataFrame following execute()
Fetch an Arrow RecordBatchReader following execute()
Fetch all rows from a result following execute
Fetch a result as DataFrame following execute()
Fetch the next set of rows from a result following execute
Fetch a result as list of NumPy arrays following execute
Fetch a single row from a result following execute
Check if a filesystem with the provided name is currently registered
Filter the relation object by the filter in filter_expr
Create a relation object from an Arrow object
Create a relation object from the CSV file in ‘name’
Create a relation object from the DataFrame in df
Overloaded function.
from_parquet(file_glob: str, binary_as_string: bool = False, *, file_row_number: bool = False, filename: bool = False, hive_partitioning: bool = False, union_by_name: bool = False, compression: object = None, connection: duckdb.DuckDBPyConnection = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from the Parquet files in file_glob
from_parquet(file_globs: list[str], binary_as_string: bool = False, *, file_row_number: bool = False, filename: bool = False, hive_partitioning: bool = False, union_by_name: bool = False, compression: object = None, connection: duckdb.DuckDBPyConnection = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from the Parquet files in file_globs
Run a SQL query. If it is a SELECT statement, create a relation object from the given SQL query, otherwise run the query as-is.
Extract the required table names from a query
Install an extension by name, with an optional version and/or repository to get the extension from
Interrupt pending operations
Only retrieve the first n rows from this relation object, starting at offset
List registered filesystems, including builtin ones
Create a list type object of ‘type’
Load an installed extension
Create a map type object from ‘key_type’ and ‘value_type’
Reorder the relation object by order_expr
Fetch a result as Polars DataFrame following execute()
Project the relation object by the projection in project_expr
Run a SQL query. If it is a SELECT statement, create a relation object from the given SQL query, otherwise run the query as-is.
Run the given SQL query in sql_query on the view named virtual_table_name that refers to the relation object
Create a relation object from the CSV file in ‘name’
Create a relation object from the JSON file in ‘name’
Overloaded function.
read_parquet(file_glob: str, binary_as_string: bool = False, *, file_row_number: bool = False, filename: bool = False, hive_partitioning: bool = False, union_by_name: bool = False, compression: object = None, connection: duckdb.DuckDBPyConnection = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from the Parquet files in file_glob
read_parquet(file_globs: list[str], binary_as_string: bool = False, *, file_row_number: bool = False, filename: bool = False, hive_partitioning: bool = False, union_by_name: bool = False, compression: object = None, connection: duckdb.DuckDBPyConnection = None) -> duckdb.duckdb.DuckDBPyRelation
Create a relation object from the Parquet files in file_globs
Register the passed Python Object value for querying with a view
Register an fsspec-compliant filesystem
Remove a previously created function
Roll back changes performed within a transaction
Create a struct type object from ‘fields’
Get result set row count
Register the provided connection as the default to be used by the module
Run a SQL query. If it is a SELECT statement, create a relation object from the given SQL query, otherwise run the query as-is.
Create a type object by parsing the ‘type_str’ string
Create a string type with an optional collation
Create a struct type object from ‘fields’
Create a relation object for the named table
Create a relation object from the named table function with given parameters
Fetch a result as dict of TensorFlow Tensors following execute()
Bases: pybind11_object
Members:
identifier
numeric_const
string_const
operator
keyword
comment
Tokenizes a SQL string, returning a list of (position, type) tuples that can be used, e.g., for syntax highlighting
Fetch a result as dict of PyTorch Tensors following execute()
Create a type object by parsing the ‘type_str’ string
Create a union type object from ‘members’
Unregister the view name
Unregister a filesystem
Create a relation object from the passed values
Create a relation object for the named view
Write the relation object to a CSV file in ‘file_name’