Write the contained data to an HDF5 file using HDFStore.
Hierarchical Data Format (HDF) is self-describing, allowing an application to interpret the structure and contents of a file with no outside information. One HDF file can hold a mix of related objects which can be accessed as a group or as individual objects.
To add another DataFrame or Series to an existing HDF file, use append mode and a different key.
Warning
One can store a subclass of DataFrame or Series to HDF5, but the type of the subclass is lost upon storing.
For more information see the user guide.
Parameters
path_or_buf : str or pandas.HDFStore
File path or HDFStore object.
key : str
Identifier for the group in the store.
mode : {'a', 'w', 'r+'}, default 'a'
Mode to open file:
'w': write; a new file is created (an existing file with the same name would be deleted).
'a': append; an existing file is opened for reading and writing, and if the file does not exist it is created.
'r+': similar to 'a', but the file must already exist.
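For instance, one can create a fresh file with mode='w' and then add a second object under its own key with the default mode='a' (a minimal sketch; the file name and keys are illustrative, and the PyTables package must be installed):
>>> pd.DataFrame({'A': [1, 2]}).to_hdf('modes.h5', key='first', mode='w')
>>> pd.DataFrame({'B': [3, 4]}).to_hdf('modes.h5', key='second', mode='a')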
complevel : int, 0-9, default None
Specifies a compression level for data. A value of 0 or None disables compression.
complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'blosc'
Specifies the compression library to be used. These additional compressors for Blosc are supported (default if no compressor is specified: 'blosc:blosclz'): {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy', 'blosc:zlib', 'blosc:zstd'}. Specifying a compression library which is not available raises a ValueError.
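As a hedged sketch, compression could be requested as follows (the file name is illustrative, and 'blosc:zstd' works only if the local PyTables/Blosc build provides it):
>>> df = pd.DataFrame({'A': range(1000)})
>>> df.to_hdf('compressed.h5', key='df', mode='w',
...           complevel=9, complib='blosc:zstd')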
append : bool, default False
For table formats, append the input data to the existing data.
format : {'fixed', 'table', None}, default 'fixed'
Possible values:
'fixed': Fixed format. Fast writing/reading. Not appendable, nor searchable.
'table': Table format. Written as a PyTables Table structure, which may perform worse but allows more flexible operations like searching / selecting subsets of the data.
If None, pd.get_option('io.hdf.default_format') is checked, followed by a fallback to 'fixed'.
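A minimal sketch of the table format, which allows appending extra rows to an existing key (the file name is illustrative; requires PyTables):
>>> df = pd.DataFrame({'A': [1, 2, 3]})
>>> df.to_hdf('table.h5', key='df', mode='w', format='table')
>>> pd.DataFrame({'A': [4, 5]}).to_hdf('table.h5', key='df', append=True)
>>> pd.read_hdf('table.h5', 'df')
   A
0  1
1  2
2  3
0  4
1  5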
index : bool, default True
Write DataFrame index as a column.
min_itemsize : dict or int, optional
Map column names to minimum string sizes for columns.
nan_rep : Any, optional
How to represent null values as str. Not allowed with append=True.
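A hedged sketch combining both options for a string column (the file and column names are illustrative; note that min_itemsize is honored only with format='table'):
>>> df = pd.DataFrame({'C': ['short text', None]})
>>> df.to_hdf('strings.h5', key='df', mode='w', format='table',
...           min_itemsize={'C': 50}, nan_rep='missing')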
dropna : bool, default False
Remove missing values.
data_columns : list of columns or True, optional
List of columns to create as indexed data columns for on-disk queries, or True to use all columns. By default only the axes of the object are indexed. See the user guide section on querying via data columns for more information. Applicable only to format='table'.
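For example, marking a column as a data column allows on-disk filtering through the where argument of pandas.read_hdf (a minimal sketch; the file and column names are illustrative):
>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [10, 20, 30]})
>>> df.to_hdf('query.h5', key='df', mode='w', format='table',
...           data_columns=['A'])
>>> pd.read_hdf('query.h5', 'df', where='A > 1')
   A   B
1  2  20
2  3  30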
errors : str, default 'strict'
Specifies how encoding and decoding errors are to be handled. See the errors argument for open() for a full list of options.
Examples
>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
...                   index=['a', 'b', 'c'])
>>> df.to_hdf('data.h5', key='df', mode='w')
We can add another object to the same file:
>>> s = pd.Series([1, 2, 3, 4])
>>> s.to_hdf('data.h5', key='s')
Reading from HDF file:
>>> pd.read_hdf('data.h5', 'df')
   A  B
a  1  4
b  2  5
c  3  6
>>> pd.read_hdf('data.h5', 's')
0    1
1    2
2    3
3    4
dtype: int64