Convert the object to a JSON string.
Note that NaN and None will be converted to null, and datetime objects will be converted to UNIX timestamps.
String, path object (implementing os.PathLike[str]), or file-like object implementing a write() function. If None, the result is returned as a string.
Indication of expected JSON string format.

Series:
default is 'index'
allowed values are: {'split', 'records', 'index', 'table'}.

DataFrame:
default is 'columns'
allowed values are: {'split', 'records', 'index', 'columns', 'values', 'table'}.
The format of the JSON string:

'split' : dict like {'index' -> [index], 'columns' -> [columns], 'data' -> [values]}
'records' : list like [{column -> value}, ... , {column -> value}]
'index' : dict like {index -> {column -> value}}
'columns' : dict like {column -> {index -> value}}
'values' : just the values array
'table' : dict like {'schema': {schema}, 'data': {data}}

Describing the data, where the data component is like orient='records'.
Type of date conversion. 'epoch' = epoch milliseconds, 'iso' = ISO 8601. The default depends on the orient. For orient='table', the default is 'iso'. For all other orients, the default is 'epoch'.
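A minimal sketch of the difference between the two formats; the DataFrame and its column name are hypothetical:

```python
import pandas as pd

# Hypothetical one-row frame with a datetime column.
df = pd.DataFrame({"ts": pd.to_datetime(["2021-01-01"])})

epoch = df.to_json(date_format="epoch")  # milliseconds since the UNIX epoch
iso = df.to_json(date_format="iso")      # ISO 8601 strings

print(epoch)
print(iso)
```

The epoch output contains the integer millisecond timestamp, while the iso output contains an ISO 8601 string.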
The number of decimal places to use when encoding floating-point values. The maximum allowed value is 15. Passing double_precision greater than 15 will raise a ValueError.
Force encoded string to be ASCII.
The time unit to encode to, governs timestamp and ISO 8601 precision. One of 's', 'ms', 'us', 'ns' for second, millisecond, microsecond, and nanosecond respectively.
Handler to call if object cannot otherwise be converted to a suitable format for JSON. Should receive a single argument which is the object to convert and return a serialisable object.
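As a sketch, a handler can turn an otherwise unserialisable custom object into a plain dict; the `Point` class and column name here are hypothetical:

```python
import pandas as pd

class Point:  # a custom type the JSON encoder does not know about
    def __init__(self, x, y):
        self.x, self.y = x, y

df = pd.DataFrame({"p": [Point(1, 2)]})

# The handler receives each unserialisable object and must return
# something JSON-serialisable, here a plain dict.
out = df.to_json(default_handler=lambda obj: {"x": obj.x, "y": obj.y})
print(out)
```

Without the handler, serialising `df` would raise, since the encoder has no rule for `Point`.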
If 'orient' is 'records', write out line-delimited JSON format. Will throw a ValueError for any other 'orient', since the others are not list-like.
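A small sketch of line-delimited output (the DataFrame is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# One JSON object per line (a.k.a. JSON Lines / NDJSON).
out = df.to_json(orient="records", lines=True)
print(out)

# Any other orient raises, since only 'records' is list-like:
# df.to_json(orient="split", lines=True)  # ValueError
```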
For on-the-fly compression of the output data. If 'infer' and 'path_or_buf' is path-like, then detect compression from the following extensions: '.gz', '.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2' (otherwise no compression). Set to None for no compression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd', 'xz', 'tar'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, zstandard.ZstdCompressor, lzma.LZMAFile or tarfile.TarFile, respectively. As an example, the following could be passed for faster compression and to create a reproducible gzip archive: compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
Added in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
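A sketch of writing gzip-compressed JSON with a dict-style compression argument; the temporary path is hypothetical, and the mtime=1 key is forwarded to gzip.GzipFile for reproducible output:

```python
import gzip
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# Hypothetical temp path; the '.gz' suffix would also let the default
# compression='infer' choose gzip automatically.
path = os.path.join(tempfile.mkdtemp(), "out.json.gz")
df.to_json(path, compression={"method": "gzip", "mtime": 1})

# Round-trip through gzip to check the payload.
with gzip.open(path, "rt") as f:
    restored = f.read()
print(restored)
```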
The index is only used when 'orient' is 'split', 'index', 'column', or 'table'. Of these, 'index' and 'column' do not support index=False.
Length of whitespace used to indent each record.
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib.request.Request as header options. For other URLs (e.g. starting with 's3://', and 'gcs://') the key-value pairs are forwarded to fsspec.open. Please see fsspec and urllib for more details, and for more examples on storage options refer here.
Specify the IO mode for output when supplying a path_or_buf. Accepted args are 'w' (writing) and 'a' (append) only. mode='a' is only supported when lines is True and orient is 'records'.
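A sketch of appending records to an existing line-delimited file; the path and the DataFrames are hypothetical:

```python
import os
import tempfile

import pandas as pd

path = os.path.join(tempfile.mkdtemp(), "records.jsonl")

# mode='a' is only valid together with lines=True and orient='records'.
pd.DataFrame({"a": [1]}).to_json(path, orient="records", lines=True, mode="w")
pd.DataFrame({"a": [2]}).to_json(path, orient="records", lines=True, mode="a")

with open(path) as f:
    content = f.read()
print(content)
```

After both calls, the file contains the records from both writes.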
If path_or_buf is None, returns the resulting json format as a string. Otherwise returns None.
See also
read_json
Convert a JSON string to pandas object.
Notes
The behavior of indent=0 varies from the stdlib, which does not indent the output but does insert newlines. Currently, indent=0 and the default indent=None are equivalent in pandas, though this may change in a future release.
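A minimal check of the indenting behavior; the one-column DataFrame is hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"a": [1]})

compact = df.to_json()           # default indent=None: a single line
indented = df.to_json(indent=4)  # newlines plus 4-space indentation

print(compact)
print(indented)
```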
orient='table' contains a 'pandas_version' field under 'schema'. This stores the version of pandas used in the latest revision of the schema.
Examples
>>> from json import loads, dumps
>>> df = pd.DataFrame(
...     [["a", "b"], ["c", "d"]],
...     index=["row 1", "row 2"],
...     columns=["col 1", "col 2"],
... )
>>> result = df.to_json(orient="split")
>>> parsed = loads(result)
>>> dumps(parsed, indent=4)
{
    "columns": [
        "col 1",
        "col 2"
    ],
    "index": [
        "row 1",
        "row 2"
    ],
    "data": [
        [
            "a",
            "b"
        ],
        [
            "c",
            "d"
        ]
    ]
}
Encoding/decoding a DataFrame using 'records' formatted JSON. Note that index labels are not preserved with this encoding.
>>> result = df.to_json(orient="records")
>>> parsed = loads(result)
>>> dumps(parsed, indent=4)
[
    {
        "col 1": "a",
        "col 2": "b"
    },
    {
        "col 1": "c",
        "col 2": "d"
    }
]
Encoding/decoding a DataFrame using 'index' formatted JSON:
>>> result = df.to_json(orient="index")
>>> parsed = loads(result)
>>> dumps(parsed, indent=4)
{
    "row 1": {
        "col 1": "a",
        "col 2": "b"
    },
    "row 2": {
        "col 1": "c",
        "col 2": "d"
    }
}
Encoding/decoding a DataFrame using 'columns' formatted JSON:
>>> result = df.to_json(orient="columns")
>>> parsed = loads(result)
>>> dumps(parsed, indent=4)
{
    "col 1": {
        "row 1": "a",
        "row 2": "c"
    },
    "col 2": {
        "row 1": "b",
        "row 2": "d"
    }
}
Encoding/decoding a DataFrame using 'values' formatted JSON:
>>> result = df.to_json(orient="values")
>>> parsed = loads(result)
>>> dumps(parsed, indent=4)
[
    [
        "a",
        "b"
    ],
    [
        "c",
        "d"
    ]
]
Encoding with Table Schema:
>>> result = df.to_json(orient="table")
>>> parsed = loads(result)
>>> dumps(parsed, indent=4)
{
    "schema": {
        "fields": [
            {
                "name": "index",
                "type": "string"
            },
            {
                "name": "col 1",
                "type": "string"
            },
            {
                "name": "col 2",
                "type": "string"
            }
        ],
        "primaryKey": [
            "index"
        ],
        "pandas_version": "1.4.0"
    },
    "data": [
        {
            "index": "row 1",
            "col 1": "a",
            "col 2": "b"
        },
        {
            "index": "row 2",
            "col 1": "c",
            "col 2": "d"
        }
    ]
}