Read SQL query into a DataFrame.
Returns a DataFrame corresponding to the result set of the query string. Optionally provide an index_col parameter to use one of the columns as the index; otherwise, a default integer index will be used.
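A minimal sketch of this behaviour, assuming an in-memory SQLite connection as the DBAPI2 object (the table and column names are illustrative, not from the docs):

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "a"), (2, "b")])

# one column of the result set becomes the index
df = pd.read_sql_query("SELECT id, name FROM users", conn, index_col="id")
conn.close()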
SQL query to be executed.
Using SQLAlchemy makes it possible to use any DB supported by that library. If a DBAPI2 object, only sqlite3 is supported.
Column(s) to set as index (MultiIndex).
Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point. Useful for SQL result sets.
List of parameters to pass to execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249's paramstyle, is supported. E.g. for psycopg2, uses %(name)s so use params={'name': 'value'}.
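A hedged sketch of driver-dependent parameter styles: sqlite3 uses the qmark style shown here, while the psycopg2 pyformat style is only indicated in a comment (table and values are illustrative):

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "open"), (2, "closed")])

# sqlite3: qmark style, positional "?" placeholders and a sequence of values
open_orders = pd.read_sql_query(
    "SELECT * FROM orders WHERE status = ?", conn, params=("open",)
)

# psycopg2 (pyformat style) would instead look like:
# pd.read_sql_query("SELECT * FROM orders WHERE status = %(s)s",
#                   pg_conn, params={"s": "open"})
conn.close()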
List of column names to parse as dates.
Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.
Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime(). Especially useful with databases without native Datetime support, such as SQLite.
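A hedged sketch of the three parse_dates forms against an in-memory SQLite table (SQLite stores datetimes as text or integers, so the parsing is done by pandas; names and values are illustrative):

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, created TEXT, epoch_s INTEGER)")
conn.execute("INSERT INTO events VALUES ('a', '2024-01-02 03:04:05', 1704164645)")

# list form: default to_datetime parsing
df1 = pd.read_sql_query("SELECT * FROM events", conn, parse_dates=["created"])

# dict of format strings: explicit strftime format for string times
df2 = pd.read_sql_query("SELECT * FROM events", conn,
                        parse_dates={"created": "%Y-%m-%d %H:%M:%S"})

# dict of arg dicts: keyword arguments forwarded to pandas.to_datetime,
# here treating integers as seconds since the epoch
df3 = pd.read_sql_query("SELECT * FROM events", conn,
                        parse_dates={"epoch_s": {"unit": "s"}})
conn.close()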
If specified, return an iterator where chunksize is the number of rows to include in each chunk.
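A hedged sketch of chunked reading (illustrative table; each yielded chunk is a DataFrame of at most chunksize rows):

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nums (n INTEGER)")
conn.executemany("INSERT INTO nums VALUES (?)", [(i,) for i in range(10)])

total = 0
for chunk in pd.read_sql_query("SELECT n FROM nums", conn, chunksize=4):
    total += chunk["n"].sum()   # chunks of 4, 4 and 2 rows
conn.close()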
Data type for data or columns. E.g. np.float64 or {'a': np.float64, 'b': np.int32, 'c': 'Int64'}.
Added in version 1.3.0.
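A hedged sketch of per-column dtypes (pandas >= 1.3; the nullable 'Int64' is chosen here because the column contains NULLs; names are illustrative):

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (a REAL, b INTEGER, c INTEGER)")
conn.execute("INSERT INTO m VALUES (1.5, 2, NULL)")

df = pd.read_sql_query(
    "SELECT * FROM m", conn,
    dtype={"a": "float64", "b": "int32", "c": "Int64"},
)
# df.dtypes -> a: float64, b: int32, c: Int64 (nullable)
conn.close()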
Back-end data type applied to the resultant DataFrame (still experimental). Behaviour is as follows:
"numpy_nullable": returns nullable-dtype-backed DataFrame (default).
"pyarrow": returns pyarrow-backed nullable ArrowDtype DataFrame.
Added in version 2.0.
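A hedged sketch of the two backends (pandas >= 2.0; the "pyarrow" option additionally assumes the pyarrow package is installed; the table is illustrative):

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER, y TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (None, None)])

nullable = pd.read_sql_query("SELECT * FROM t", conn,
                             dtype_backend="numpy_nullable")
# nullable.dtypes -> e.g. x: Int64, y: string (nullable extension dtypes)

arrow = pd.read_sql_query("SELECT * FROM t", conn, dtype_backend="pyarrow")
# arrow.dtypes -> ArrowDtype columns, e.g. int64[pyarrow], string[pyarrow]
conn.close()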
See also
read_sql_table
Read SQL database table into a DataFrame.
read_sql
Read SQL query or database table into a DataFrame.
Notes
Any datetime values with time zone information parsed via the parse_dates parameter will be converted to UTC.
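A hedged illustration using SQLite, which has no native timezone-aware type: the offset is stored as text and utc=True is forwarded through parse_dates, so the parsed values come back UTC-localized (values are illustrative):

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts TEXT)")
conn.execute("INSERT INTO log VALUES ('2024-01-02 03:04:05+02:00')")

df = pd.read_sql_query("SELECT * FROM log", conn,
                       parse_dates={"ts": {"utc": True}})
# df["ts"].dtype   -> datetime64[ns, UTC]
# df["ts"].iloc[0] -> 2024-01-02 01:04:05+00:00
conn.close()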
Examples
>>> from sqlalchemy import create_engine
>>> engine = create_engine("sqlite:///database.db")
>>> with engine.connect() as conn, conn.begin():
...     data = pd.read_sql_table("data", conn)
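A companion sketch (an assumption, not part of the original example set): issuing an actual query string with read_sql_query against the same engine, assuming the "data" table from above exists.

>>> with engine.connect() as conn, conn.begin():
...     query_result = pd.read_sql_query("SELECT * FROM data", conn)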