Bases: _Weakrefable
Collection of data fragments and potentially child datasets.
Arrow Datasets allow you to query against data that has been split across multiple files. This sharding of data may indicate partitioning, which can accelerate queries that only touch some partitions (files).
Methods
Attributes
Count rows matching the scanner filter.
Expression
, default None
Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.
int
, default 131_072
The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be reduced to limit their size.
int
, default 16
The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.
int
, default 4
The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.
FragmentScanOptions
, default None
Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.
True
If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.
True
If enabled, metadata may be cached when scanning to speed up repeated scans.
MemoryPool
, default None
For memory allocations, if required. If not specified, uses the default pool.
int
Apply a row filter to the dataset.
Expression
The filter that should be applied to the dataset.
Dataset
Returns an iterator over the fragments in this dataset.
Expression
, default None
Return fragments matching the optional filter, either using the partition_expression or internal information like Parquet's statistics.
Fragment
Load the first N rows of the dataset.
int
The number of rows to load.
list
of str
, default None
The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.
The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).
The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset's Schema.
Expression
, default None
Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.
int
, default 131_072
The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be reduced to limit their size.
int
, default 16
The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.
int
, default 4
The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.
FragmentScanOptions
, default None
Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.
True
If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.
True
If enabled, metadata may be cached when scanning to speed up repeated scans.
MemoryPool
, default None
For memory allocations, if required. If not specified, uses the default pool.
Table
Perform a join between this dataset and another one.
Result of the join will be a new dataset, where further operations can be applied.
The dataset to join to the current one, acting as the right dataset in the join operation.
str
or list
[str
]
The columns from current dataset that should be used as keys of the join operation left side.
str
or list
[str
], default None
The columns from the right_dataset that should be used as keys on the join operation right side. When None, use the same key names as the left dataset.
str
, default 'left outer'
The kind of join that should be performed, one of ('left semi', 'right semi', 'left anti', 'right anti', 'inner', 'left outer', 'right outer', 'full outer').
str
, default None
Which suffix to add to right column names. This prevents confusion when the columns in left and right datasets have colliding names.
str
, default None
Which suffix to add to the left column names. This prevents confusion when the columns in left and right datasets have colliding names.
True
Whether duplicated keys should be omitted from one of the sides in the join result.
True
Whether to use multithreading or not.
InMemoryDataset
Perform an asof join between this dataset and another one.
This is similar to a left-join except that we match on nearest key rather than equal keys. Both datasets must be sorted by the key. This type of join is most useful for time series data that are not perfectly aligned.
Optionally match on equivalent keys with 'by' before searching with 'on'.
Result of the join will be a new Dataset, where further operations can be applied.
The dataset to join to the current one, acting as the right dataset in the join operation.
str
The column from the current dataset that should be used as the 'on' key of the join operation left side.
An inexact match is used on the 'on' key, i.e. a row is considered a match if and only if left_on - tolerance <= right_on <= left_on.
The input table must be sorted by the 'on' key. Must be a single field of a common type.
Currently, the 'on' key must be an integer, date, or timestamp type.
str
or list
[str
]
The columns from current dataset that should be used as the keys of the join operation left side. The join operation is then done only for the matches in these columns.
int
The tolerance for inexact 'on' key matching. A right row is considered a match with the left row if right.on - left.on <= tolerance. The tolerance may be:
negative, in which case a past-as-of-join occurs;
or positive, in which case a future-as-of-join occurs;
or zero, in which case an exact-as-of-join occurs.
The tolerance is interpreted in the same units as the 'on' key.
str
or list
[str
], default None
The columns from the right_dataset that should be used as the 'on' key on the join operation right side. When None, use the same key name as the left dataset.
str
or list
[str
], default None
The columns from the right_dataset that should be used as 'by' keys on the join operation right side. When None, use the same key names as the left dataset.
InMemoryDataset
An Expression which evaluates to true for all data viewed by this Dataset.
Return a copy of this Dataset with a different schema.
The copy will view the same Fragments. If the new schema is not compatible with the original dataset's schema then an error will be raised.
Schema
The new dataset schema.
Build a scan operation against the dataset.
Data is not loaded immediately. Instead, this produces a Scanner, which exposes further operations (e.g. loading all data as a table, counting rows).
See the Scanner.from_dataset() method for further information.
list
of str
, default None
The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.
The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).
The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset's Schema.
Expression
, default None
Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.
int
, default 131_072
The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be reduced to limit their size.
int
, default 16
The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.
int
, default 4
The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.
FragmentScanOptions
, default None
Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.
True
If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.
True
If enabled, metadata may be cached when scanning to speed up repeated scans.
MemoryPool
, default None
For memory allocations, if required. If not specified, uses the default pool.
Scanner
Examples
>>> import pyarrow as pa
>>> table = pa.table({'year': [2020, 2022, 2021, 2022, 2019, 2021],
...                   'n_legs': [2, 2, 4, 4, 5, 100],
...                   'animal': ["Flamingo", "Parrot", "Dog", "Horse",
...                              "Brittle stars", "Centipede"]})
>>>
>>> import pyarrow.parquet as pq
>>> pq.write_table(table, "dataset_scanner.parquet")
>>> import pyarrow.dataset as ds
>>> dataset = ds.dataset("dataset_scanner.parquet")
Selecting a subset of the columns:
>>> dataset.scanner(columns=["year", "n_legs"]).to_table()
pyarrow.Table
year: int64
n_legs: int64
----
year: [[2020,2022,2021,2022,2019,2021]]
n_legs: [[2,2,4,4,5,100]]
Projecting selected columns using an expression:
>>> dataset.scanner(columns={
...     "n_legs_uint": ds.field("n_legs").cast("uint8"),
... }).to_table()
pyarrow.Table
n_legs_uint: uint8
----
n_legs_uint: [[2,2,4,4,5,100]]
Filtering rows while scanning:
>>> dataset.scanner(filter=ds.field("year") > 2020).to_table()
pyarrow.Table
year: int64
n_legs: int64
animal: string
----
year: [[2022,2021,2022,2021]]
n_legs: [[2,4,4,100]]
animal: [["Parrot","Dog","Horse","Centipede"]]
The common schema of the full Dataset
Sort the Dataset by one or multiple columns.
str
or list
[tuple
(name
, order
)]
Name of the column to use to sort (ascending), or a list of multiple sorting conditions where each entry is a tuple with column name and sorting order ('ascending' or 'descending').
dict
, optional
Additional sorting options. As allowed by SortOptions
InMemoryDataset
A new dataset sorted according to the sort keys.
Select rows of data by index.
Array
or array-like
Indices of rows to select in the dataset.
list
of str
, default None
The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.
The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).
The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset's Schema.
Expression
, default None
Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.
int
, default 131_072
The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be reduced to limit their size.
int
, default 16
The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.
int
, default 4
The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.
FragmentScanOptions
, default None
Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.
True
If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.
True
If enabled, metadata may be cached when scanning to speed up repeated scans.
MemoryPool
, default None
For memory allocations, if required. If not specified, uses the default pool.
Table
Read the dataset as materialized record batches.
list
of str
, default None
The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.
The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).
The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset's Schema.
Expression
, default None
Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.
int
, default 131_072
The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be reduced to limit their size.
int
, default 16
The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.
int
, default 4
The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.
FragmentScanOptions
, default None
Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.
True
If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.
True
If enabled, metadata may be cached when scanning to speed up repeated scans.
MemoryPool
, default None
For memory allocations, if required. If not specified, uses the default pool.
RecordBatch
Read the dataset to an Arrow table.
Note that this method reads all the selected data from the dataset into memory.
list
of str
, default None
The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.
The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).
The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset's Schema.
Expression
, default None
Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.
int
, default 131_072
The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be reduced to limit their size.
int
, default 16
The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.
int
, default 4
The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.
FragmentScanOptions
, default None
Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.
True
If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.
True
If enabled, metadata may be cached when scanning to speed up repeated scans.
MemoryPool
, default None
For memory allocations, if required. If not specified, uses the default pool.
Table