Methods for listing and loading datasets:
Datasets datasets.load_dataset < source >( path: str name: typing.Optional[str] = None data_dir: typing.Optional[str] = None data_files: typing.Union[str, collections.abc.Sequence[str], collections.abc.Mapping[str, typing.Union[str, collections.abc.Sequence[str]]], NoneType] = None split: typing.Union[str, datasets.splits.Split, list[str], list[datasets.splits.Split], NoneType] = None cache_dir: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None verification_mode: typing.Union[datasets.utils.info_utils.VerificationMode, str, NoneType] = None keep_in_memory: typing.Optional[bool] = None save_infos: bool = False revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None token: typing.Union[bool, str, NoneType] = None streaming: bool = False num_proc: typing.Optional[int] = None storage_options: typing.Optional[dict] = None **config_kwargs ) → Dataset or DatasetDict
Parameters

path (str) — Path or name of the dataset.

  - if path is a dataset repository on the HF Hub (list all available datasets with huggingface_hub.list_datasets) -> load the dataset from supported files in the repository (csv, json, parquet, etc.), e.g. 'username/dataset_name', a dataset repository on the HF Hub containing the data files.
  - if path is a local directory -> load the dataset from supported files in the directory (csv, json, parquet, etc.), e.g. './path/to/directory/with/my/csv/data'.
  - if path is the name of a dataset builder and data_files or data_dir is specified (available builders are "json", "csv", "parquet", "arrow", "text", "xml", "webdataset", "imagefolder", "audiofolder", "videofolder") -> load the dataset from the files in data_files or data_dir, e.g. 'parquet'.
name (str, optional) — Name of the dataset configuration.

data_dir (str, optional) — data_dir of the dataset configuration. If specified for the generic builders (csv, text, etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.

data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).

split (Split or str) — Which split of the data to load. If None, will return a dict with all splits (typically datasets.Split.TRAIN and datasets.Split.TEST). If given, will return a single Dataset. Splits can be combined and specified like in tensorflow-datasets.

cache_dir (str, optional) — Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".

features (Features, optional) — Set the features type to use for this dataset.

download_config (DownloadConfig, optional) — Specific download configuration parameters.

download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.

verification_mode (VerificationMode or str, defaults to BASIC_CHECKS) — Verification mode determining the checks to run on the downloaded/processed dataset information (checksums/size/splits/…).
Added in 2.9.1

keep_in_memory (bool, defaults to None) — Whether to copy the dataset in memory. If None, the dataset will not be copied in memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.

save_infos (bool, defaults to False) — Save the dataset information (checksums/size/splits/…).

revision (Version or str, optional) — Version of the dataset to load. As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch. You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.

token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".

streaming (bool, defaults to False) — If set to True, don't download the data files. Instead, the data is streamed progressively while iterating over the dataset. An IterableDataset or IterableDatasetDict is returned instead in this case.
Note that streaming works for datasets that use data formats that support being iterated over, like txt, csv and jsonl files. JSON files may be downloaded completely. Streaming from remote zip or gzip files is also supported, but other compressed formats like rar and xz are not yet supported. The tgz format doesn't allow streaming.

num_proc (int, optional, defaults to None) — Number of processes when downloading and generating the dataset locally. Multiprocessing is disabled by default.
Added in 2.7.0

storage_options (dict, optional, defaults to None) — Experimental. Key/value pairs to be passed on to the dataset file-system backend, if any.
Added in 2.11.0

**config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.

Returns: Dataset or DatasetDict

  - if split is not None: the dataset requested,
  - if split is None: a DatasetDict with each split.

or IterableDataset or IterableDatasetDict: if streaming=True

  - if split is not None: the dataset requested,
  - if split is None: an IterableDatasetDict with each split.

Load a dataset from the Hugging Face Hub, or a local dataset.
You can find the list of datasets on the Hub or with huggingface_hub.list_datasets.
A dataset is a directory that contains some data files in generic formats (JSON, CSV, Parquet, etc.) and possibly in a generic structure (Webdataset, ImageFolder, AudioFolder, VideoFolder, etc.)
This function does the following under the hood:

1. Load a dataset builder:
   - Find the most common data format in the dataset and pick its associated dataset builder (JSON, CSV, Parquet, Webdataset, ImageFolder, AudioFolder, etc.).
   - It is also possible to specify data_files manually, and which dataset builder to use (e.g. "parquet").

2. Run the dataset builder:

   In the general case:
   - Download the data files from the dataset if they are not already available locally or cached.
   - Process and cache the dataset in typed Arrow tables for caching.
     Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/python generic types. They can be directly accessed from disk, loaded in RAM or even streamed over the web.

   In the streaming case:
   - Don't download or cache anything. Instead, the dataset is lazily loaded and streamed on-the-fly when iterating on it.

3. Return a dataset built from the requested splits in split (default: all).
Example:
Load a dataset from the Hugging Face Hub:
>>> from datasets import load_dataset
>>> ds = load_dataset('cornell-movie-review-data/rotten_tomatoes', split='train')

>>> from datasets import load_dataset
>>> ds = load_dataset('nyu-mll/glue', 'sst2', split='train')

>>> data_files = {'train': 'train.csv', 'test': 'test.csv'}
>>> ds = load_dataset('namespace/your_dataset_name', data_files=data_files)
>>> ds = load_dataset('namespace/your_dataset_name', data_dir='folder_name')
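As noted in the split parameter description, splits can be combined and sliced using the tensorflow-datasets style syntax; a short sketch (the percentages below are illustrative):

>>> from datasets import load_dataset
>>> # first 10% of the train split
>>> ds = load_dataset('cornell-movie-review-data/rotten_tomatoes', split='train[:10%]')
>>> # concatenation of the train and test splits
>>> ds = load_dataset('cornell-movie-review-data/rotten_tomatoes', split='train+test')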
Load a local dataset:
>>> from datasets import load_dataset
>>> ds = load_dataset('csv', data_files='path/to/local/my_dataset.csv')

>>> from datasets import load_dataset
>>> ds = load_dataset('json', data_files='path/to/local/my_dataset.json')
Load an IterableDataset:
>>> from datasets import load_dataset
>>> ds = load_dataset('cornell-movie-review-data/rotten_tomatoes', split='train', streaming=True)
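Since streaming=True returns an IterableDataset, examples are read on the fly as you iterate; a short sketch (the 'text' column name is specific to this dataset):

>>> for example in ds.take(3):
...     print(example['text'])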
Load an image dataset with the ImageFolder
dataset builder:
>>> from datasets import load_dataset
>>> ds = load_dataset('imagefolder', data_dir='/path/to/images', split='train')

datasets.load_from_disk < source >
( dataset_path: typing.Union[str, bytes, os.PathLike] keep_in_memory: typing.Optional[bool] = None storage_options: typing.Optional[dict] = None ) → Dataset or DatasetDict
Parameters

dataset_path (path-like) — Path (e.g. "dataset/train") or remote URI (e.g. "s3://my-bucket/dataset/train") of the Dataset or DatasetDict directory where the dataset/dataset-dict will be loaded from.

keep_in_memory (bool, defaults to None) — Whether to copy the dataset in memory. If None, the dataset will not be copied in memory unless explicitly enabled by setting datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the improve performance section.

storage_options (dict, optional) — Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.9.0

Returns: Dataset or DatasetDict

  - if dataset_path is a path of a dataset directory: the dataset requested,
  - if dataset_path is a path of a dataset dict directory: a DatasetDict with each split.

Loads a dataset that was previously saved using save_to_disk() from a dataset directory, or from a filesystem using any implementation of fsspec.spec.AbstractFileSystem.
Example:
>>> from datasets import load_from_disk
>>> ds = load_from_disk('path/to/dataset/directory')

datasets.load_dataset_builder < source >
( path: str name: typing.Optional[str] = None data_dir: typing.Optional[str] = None data_files: typing.Union[str, collections.abc.Sequence[str], collections.abc.Mapping[str, typing.Union[str, collections.abc.Sequence[str]]], NoneType] = None cache_dir: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None token: typing.Union[bool, str, NoneType] = None storage_options: typing.Optional[dict] = None **config_kwargs )
Parameters

path (str) — Path or name of the dataset.

  - if path is a dataset repository on the HF Hub (list all available datasets with huggingface_hub.list_datasets) -> load the dataset builder from supported files in the repository (csv, json, parquet, etc.), e.g. 'username/dataset_name', a dataset repository on the HF Hub containing the data files.
  - if path is a local directory -> load the dataset builder from supported files in the directory (csv, json, parquet, etc.), e.g. './path/to/directory/with/my/csv/data'.
  - if path is the name of a dataset builder and data_files or data_dir is specified (available builders are "json", "csv", "parquet", "arrow", "text", "xml", "webdataset", "imagefolder", "audiofolder", "videofolder") -> load the dataset builder from the files in data_files or data_dir, e.g. 'parquet'.

name (str, optional) — Name of the dataset configuration.

data_dir (str, optional) — data_dir of the dataset configuration. If specified for the generic builders (csv, text, etc.) or the Hub datasets and data_files is None, the behavior is equal to passing os.path.join(data_dir, **) as data_files to reference all the files in a directory.

data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).

cache_dir (str, optional) — Directory to read/write data. Defaults to "~/.cache/huggingface/datasets".

features (Features, optional) — Set the features type to use for this dataset.

download_config (DownloadConfig, optional) — Specific download configuration parameters.

download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.

revision (Version or str, optional) — Version of the dataset to load. As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch. You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.

token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".

storage_options (dict, optional, defaults to None) — Experimental. Key/value pairs to be passed on to the dataset file-system backend, if any.
Added in 2.11.0

**config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the BuilderConfig and used in the DatasetBuilder.
Load a dataset builder, which can be used to inspect general information that is required to build a dataset (cache directory, config, dataset info, features, etc.) without downloading the dataset itself.
You can find the list of datasets on the Hub or with huggingface_hub.list_datasets.
A dataset is a directory that contains some data files in generic formats (JSON, CSV, Parquet, etc.) and possibly in a generic structure (Webdataset, ImageFolder, AudioFolder, VideoFolder, etc.)
Example:
>>> from datasets import load_dataset_builder
>>> ds_builder = load_dataset_builder('cornell-movie-review-data/rotten_tomatoes')
>>> ds_builder.info.features
{'label': ClassLabel(names=['neg', 'pos']), 'text': Value('string')}

datasets.get_dataset_config_names < source >
( path: str revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None data_files: typing.Union[str, list, dict, NoneType] = None **download_kwargs )
Parameters

path (str) — Path to the dataset repository. Can be either:

  - a local path to the dataset directory, e.g. './dataset/squad'
  - a dataset identifier on the Hugging Face Hub (list all available datasets with huggingface_hub.list_datasets), e.g. 'rajpurkar/squad', 'nyu-mll/glue' or 'openai/webtext'

revision (Union[str, datasets.Version], optional) — If specified, the dataset module will be loaded from the datasets repository at this version. By default, it is set to the local version of the lib.

download_config (DownloadConfig, optional) — Specific download configuration parameters.

download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.

data_files (Union[Dict, List, str], optional) — Defining the data_files of the dataset configuration.

**download_kwargs (additional keyword arguments) — Optional attributes for DownloadConfig which will override the attributes in download_config if supplied, for example token.

Get the list of available config names for a particular dataset.
Example:
>>> from datasets import get_dataset_config_names
>>> get_dataset_config_names("nyu-mll/glue")
['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']

datasets.get_dataset_infos < source >
( path: str data_files: typing.Union[str, list, dict, NoneType] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None token: typing.Union[bool, str, NoneType] = None **config_kwargs )
Parameters

path (str) — Path to the dataset repository. Can be either:

  - a local path to the dataset directory, e.g. './dataset/squad'
  - a dataset identifier on the Hugging Face Hub (list all available datasets with huggingface_hub.list_datasets), e.g. 'rajpurkar/squad', 'nyu-mll/glue' or 'openai/webtext'

revision (Union[str, datasets.Version], optional) — If specified, the dataset module will be loaded from the datasets repository at this version. By default, it is set to the local version of the lib.

download_config (DownloadConfig, optional) — Specific download configuration parameters.

download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.

data_files (Union[Dict, List, str], optional) — Defining the data_files of the dataset configuration.

token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".

Get the meta information about a dataset, returned as a dict mapping config name to DatasetInfoDict.
Example:
>>> from datasets import get_dataset_infos
>>> get_dataset_infos('cornell-movie-review-data/rotten_tomatoes')
{'default': DatasetInfo(description="Movie Review Dataset. This is a dataset of containing 5,331 positive and 5,331 negative processed sentences from Rotten Tomatoes movie reviews...), ...}

datasets.get_dataset_split_names < source >
( path: str config_name: typing.Optional[str] = None data_files: typing.Union[str, collections.abc.Sequence[str], collections.abc.Mapping[str, typing.Union[str, collections.abc.Sequence[str]]], NoneType] = None download_config: typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode: typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None revision: typing.Union[str, datasets.utils.version.Version, NoneType] = None token: typing.Union[bool, str, NoneType] = None **config_kwargs )
Parameters

path (str) — Path to the dataset repository. Can be either:

  - a local path to the dataset directory, e.g. './dataset/squad'
  - a dataset identifier on the Hugging Face Hub (list all available datasets with huggingface_hub.list_datasets), e.g. 'rajpurkar/squad', 'nyu-mll/glue' or 'openai/webtext'

config_name (str, optional) — Name of the dataset configuration.

data_files (str or Sequence or Mapping, optional) — Path(s) to source data file(s).

download_config (DownloadConfig, optional) — Specific download configuration parameters.

download_mode (DownloadMode or str, defaults to REUSE_DATASET_IF_EXISTS) — Download/generate mode.

revision (Version or str, optional) — Version of the dataset to load. As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch. You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.

token (str or bool, optional) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True, or not specified, will get token from "~/.huggingface".

Get the list of available splits for a particular config and dataset.
Example:
>>> from datasets import get_dataset_split_names
>>> get_dataset_split_names('cornell-movie-review-data/rotten_tomatoes')
['train', 'validation', 'test']

From files
Configurations used to load data files. They are used when loading local files or a dataset repository:
load_dataset("parquet", data_dir="path/to/data/dir")
load_dataset("allenai/c4")
You can pass arguments to load_dataset to configure data loading. For example, you can specify the sep parameter to define the CsvConfig that is used to load the data:
load_dataset("csv", data_dir="path/to/data/dir", sep="\t")Text class datasets.packaged_modules.text.TextConfig < source >
( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None encoding: str = 'utf-8' encoding_errors: typing.Optional[str] = None chunksize: int = 10485760 keep_linebreaks: bool = False sample_by: str = 'line' )
BuilderConfig for text files.
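For instance, the sample_by option from the signature above controls whether each example is a line, a paragraph or a whole document; a minimal sketch with an illustrative file path:

>>> from datasets import load_dataset
>>> ds = load_dataset('text', data_files='path/to/my_texts.txt', sample_by='paragraph', split='train')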
class datasets.packaged_modules.text.Text < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
CSV class datasets.packaged_modules.csv.CsvConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None sep: str = ',' delimiter: typing.Optional[str] = None header: typing.Union[int, list[int], str, NoneType] = 'infer' names: typing.Optional[list[str]] = None column_names: typing.Optional[list[str]] = None index_col: typing.Union[int, str, list[int], list[str], NoneType] = None usecols: typing.Union[list[int], list[str], NoneType] = None prefix: typing.Optional[str] = None mangle_dupe_cols: bool = True engine: typing.Optional[typing.Literal['c', 'python', 'pyarrow']] = None converters: dict = None true_values: typing.Optional[list] = None false_values: typing.Optional[list] = None skipinitialspace: bool = False skiprows: typing.Union[int, list[int], NoneType] = None nrows: typing.Optional[int] = None na_values: typing.Union[str, list[str], NoneType] = None keep_default_na: bool = True na_filter: bool = True verbose: bool = False skip_blank_lines: bool = True thousands: typing.Optional[str] = None decimal: str = '.' lineterminator: typing.Optional[str] = None quotechar: str = '"' quoting: int = 0 escapechar: typing.Optional[str] = None comment: typing.Optional[str] = None encoding: typing.Optional[str] = None dialect: typing.Optional[str] = None error_bad_lines: bool = True warn_bad_lines: bool = True skipfooter: int = 0 doublequote: bool = True memory_map: bool = False float_precision: typing.Optional[str] = None chunksize: int = 10000 features: typing.Optional[datasets.features.features.Features] = None encoding_errors: typing.Optional[str] = 'strict' on_bad_lines: typing.Literal['error', 'warn', 'skip'] = 'error' date_format: typing.Optional[str] = None )
BuilderConfig for CSV.
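CSV options from the signature above (such as sep, names or skiprows) can be passed directly to load_dataset; a minimal sketch with an illustrative headerless TSV file:

>>> from datasets import load_dataset
>>> ds = load_dataset('csv', data_files='path/to/data.tsv', sep='\t', names=['text', 'label'], split='train')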
class datasets.packaged_modules.csv.Csv < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
JSON class datasets.packaged_modules.json.JsonConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None encoding: str = 'utf-8' encoding_errors: typing.Optional[str] = None field: typing.Optional[str] = None use_threads: bool = True block_size: typing.Optional[int] = None chunksize: int = 10485760 newlines_in_values: typing.Optional[bool] = None )
BuilderConfig for JSON.
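The field option from the signature above selects the key that holds the list of examples when the JSON file is a single object rather than one object per line; a minimal sketch with an illustrative file and field name:

>>> from datasets import load_dataset
>>> # e.g. for a file shaped like {"version": "1.0", "data": [{...}, {...}]}
>>> ds = load_dataset('json', data_files='path/to/my_file.json', field='data', split='train')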
class datasets.packaged_modules.json.Json < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
XML class datasets.packaged_modules.xml.XmlConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None encoding: str = 'utf-8' encoding_errors: typing.Optional[str] = None )
BuilderConfig for xml files.
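A minimal sketch of loading XML files with the generic 'xml' builder, using the encoding option from the signature above (the file path is illustrative):

>>> from datasets import load_dataset
>>> ds = load_dataset('xml', data_files='path/to/my_file.xml', encoding='utf-8', split='train')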
class datasets.packaged_modules.xml.Xml < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
Parquet class datasets.packaged_modules.parquet.ParquetConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None batch_size: typing.Optional[int] = None columns: typing.Optional[list[str]] = None features: typing.Optional[datasets.features.features.Features] = None filters: typing.Union[pyarrow._compute.Expression, list[tuple], list[list[tuple]], NoneType] = None )
BuilderConfig for Parquet.
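The columns and filters options from the signature above can restrict what is read from Parquet files; a minimal sketch where the column names and filter are illustrative:

>>> from datasets import load_dataset
>>> ds = load_dataset(
...     'parquet',
...     data_files='path/to/data/*.parquet',
...     columns=['text', 'label'],
...     filters=[('label', '==', 1)],
...     split='train',
... )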
class datasets.packaged_modules.parquet.Parquet < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
Arrow class datasets.packaged_modules.arrow.ArrowConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None )
BuilderConfig for Arrow.
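A minimal sketch of loading Arrow files with the generic 'arrow' builder (the file pattern is illustrative):

>>> from datasets import load_dataset
>>> ds = load_dataset('arrow', data_files='path/to/data/*.arrow', split='train')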
class datasets.packaged_modules.arrow.Arrow < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
SQL class datasets.packaged_modules.sql.SqlConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None sql: typing.Union[str, ForwardRef('sqlalchemy.sql.Selectable')] = None con: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')] = None index_col: typing.Union[str, list[str], NoneType] = None coerce_float: bool = True params: typing.Union[list, tuple, dict, NoneType] = None parse_dates: typing.Union[list, dict, NoneType] = None columns: typing.Optional[list[str]] = None chunksize: typing.Optional[int] = 10000 features: typing.Optional[datasets.features.features.Features] = None )
BuilderConfig for SQL.
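This config is typically exercised through Dataset.from_sql rather than load_dataset; a minimal sketch, assuming a local SQLite database and a table named my_table (both are illustrative):

>>> from datasets import Dataset
>>> ds = Dataset.from_sql('SELECT text, label FROM my_table', con='sqlite:///path/to/my.db')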
class datasets.packaged_modules.sql.Sql < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
Images class datasets.packaged_modules.imagefolder.ImageFolderConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None drop_labels: bool = None drop_metadata: bool = None metadata_filenames: list = None filters: typing.Union[pyarrow._compute.Expression, list[tuple], list[list[tuple]], NoneType] = None )
BuilderConfig for ImageFolder.
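The drop_labels option from the signature above controls whether class labels are inferred from directory names; a minimal sketch with an illustrative data directory:

>>> from datasets import load_dataset
>>> ds = load_dataset('imagefolder', data_dir='path/to/images', drop_labels=True, split='train')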
class datasets.packaged_modules.imagefolder.ImageFolder < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
Audio class datasets.packaged_modules.audiofolder.AudioFolderConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None drop_labels: bool = None drop_metadata: bool = None metadata_filenames: list = None filters: typing.Union[pyarrow._compute.Expression, list[tuple], list[list[tuple]], NoneType] = None )
Builder Config for AudioFolder.
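A minimal sketch of loading audio files with the 'audiofolder' builder (the data directory is illustrative):

>>> from datasets import load_dataset
>>> ds = load_dataset('audiofolder', data_dir='path/to/audio_files', split='train')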
class datasets.packaged_modules.audiofolder.AudioFolder < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
Videos class datasets.packaged_modules.videofolder.VideoFolderConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None drop_labels: bool = None drop_metadata: bool = None metadata_filenames: list = None filters: typing.Union[pyarrow._compute.Expression, list[tuple], list[list[tuple]], NoneType] = None )
BuilderConfig for VideoFolder.
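A minimal sketch of loading videos with the 'videofolder' builder (the data directory is illustrative):

>>> from datasets import load_dataset
>>> ds = load_dataset('videofolder', data_dir='path/to/videos', split='train')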
class datasets.packaged_modules.videofolder.VideoFolder < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
Pdf class datasets.packaged_modules.pdffolder.PdfFolderConfig < source >( name: str = 'default' version: typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir: typing.Optional[str] = None data_files: typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description: typing.Optional[str] = None features: typing.Optional[datasets.features.features.Features] = None drop_labels: bool = None drop_metadata: bool = None metadata_filenames: list = None filters: typing.Union[pyarrow._compute.Expression, list[tuple], list[list[tuple]], NoneType] = None )
BuilderConfig for PdfFolder.
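Assuming PDFs are loaded through a 'pdffolder' builder following the same pattern as the other folder-based builders (the builder name here is an assumption, and the data directory is illustrative):

>>> from datasets import load_dataset
>>> ds = load_dataset('pdffolder', data_dir='path/to/pdfs', split='train')  # builder name assumed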
class datasets.packaged_modules.pdffolder.PdfFolder < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
WebDataset class datasets.packaged_modules.webdataset.WebDataset < source >( cache_dir: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None hash: typing.Optional[str] = None base_path: typing.Optional[str] = None info: typing.Optional[datasets.info.DatasetInfo] = None features: typing.Optional[datasets.features.features.Features] = None token: typing.Union[bool, str, NoneType] = None repo_id: typing.Optional[str] = None data_files: typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir: typing.Optional[str] = None storage_options: typing.Optional[dict] = None writer_batch_size: typing.Optional[int] = None **config_kwargs )
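A minimal sketch of loading WebDataset TAR shards with the 'webdataset' builder (the file pattern is illustrative):

>>> from datasets import load_dataset
>>> ds = load_dataset('webdataset', data_files='path/to/shards/*.tar', split='train')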