Scikit-learn_bench is a benchmark tool for libraries and frameworks implementing Scikit-learn-like APIs and other workloads.
Benefits:
How to create a usable Python environment with the required frameworks:
```bash
# with pip
pip install -r envs/requirements-sklearn.txt

# or with conda
conda env create -n sklearn -f envs/conda-env-sklearn.yml
```
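After creating the environment, you can run a quick sanity check (a minimal sketch assuming the conda environment named `sklearn` created above):

```bash
# Activate the environment and confirm that scikit-learn is importable
conda activate sklearn
python -c "import sklearn; print(sklearn.__version__)"
```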
To create an environment for RAPIDS-based benchmarks (using the libmamba solver):

```bash
conda env create -n rapids --solver=libmamba -f envs/conda-env-rapids.yml
```

🚀 How To Use Scikit-learn_bench
How to run benchmarks using the `sklbench` module and a specific configuration:
python -m sklbench --config configs/sklearn_example.json
The default output is a file with JSON-formatted results of the benchmarking cases. To also generate a human-readable report, use the following command:
python -m sklbench --config configs/sklearn_example.json --report
By default, the output and report file paths are `result.json` and `report.xlsx`. To specify custom file paths, run:
python -m sklbench --config configs/sklearn_example.json --report --result-file result_example.json --report-file report_example.xlsx
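To take a quick look at the raw JSON results from the command line, you can pretty-print the file (a minimal sketch using the standard `json.tool` module; the file name matches the command above):

```bash
# Pretty-print the first lines of the raw JSON-formatted results
python -m json.tool result_example.json | head -n 40
```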
For a description of all benchmarks runner arguments, refer to the documentation.
To combine raw result files gathered from different environments, call the report generator:
python -m sklbench.report --result-files result_1.json result_2.json --report-file report_example.xlsx
For a description of all report generator arguments, refer to the documentation.
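For example, here is a sketch of gathering results for the same configuration from two conda environments and merging them into a single report (environment names and result file names are illustrative):

```bash
# Run the same benchmark configuration in two different environments
conda run -n sklearn python -m sklbench --config configs/sklearn_example.json --result-file result_sklearn.json
conda run -n rapids python -m sklbench --config configs/sklearn_example.json --result-file result_rapids.json

# Combine the raw result files into one human-readable report
python -m sklbench.report --result-files result_sklearn.json result_rapids.json --report-file combined_report.xlsx
```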
Scikit-learn_bench High-Level Workflow

```mermaid
flowchart TB
    A[User] -- High-level arguments --> B[Benchmarks runner]
    B -- Generated benchmarking cases --> C["Benchmarks collection"]
    C -- Raw JSON-formatted results --> D[Report generator]
    D -- Human-readable report --> A

    classDef userStyle fill:#44b,color:white,stroke-width:2px,stroke:white;
    class A userStyle
```
Scikit-learn_bench supports the following types of benchmarks: