Tired of slow, inaccurate, or overly complex benchmarking tools? @pawel-up/benchmark is a modern, lightweight, and highly accurate benchmarking library designed for JavaScript and TypeScript. It provides everything you need to measure the performance of your code with confidence.
## Why Choose @pawel-up/benchmark?

@pawel-up/benchmark uses advanced techniques such as warm-up iterations, adaptive inner iterations, and outlier removal to produce highly accurate and reliable results. The library is designed to be a lean and powerful benchmarking core; integrations for CLI, file output, and other features are intended to be built on top of it.
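To make those terms concrete, here is a rough, conceptual sketch of what warm-up iterations and outlier removal can look like. The function name, iteration counts, and the IQR-based outlier rule are illustrative assumptions, not the library's actual internals:

```ts
// Conceptual sketch only - this is NOT the library's internal implementation.
// It illustrates the general idea behind warm-up iterations and outlier removal.
async function collectSample(fn: () => unknown | Promise<unknown>): Promise<number[]> {
  // Warm-up: run the function a few times so JIT optimization and caches settle.
  for (let i = 0; i < 10; i++) {
    await fn();
  }

  // Measure a sample of execution times (in milliseconds).
  const sample: number[] = [];
  for (let i = 0; i < 100; i++) {
    const start = performance.now();
    await fn();
    sample.push(performance.now() - start);
  }

  // Remove outliers with the interquartile-range rule (one common approach).
  const sorted = [...sample].sort((a, b) => a - b);
  const q1 = sorted[Math.floor(sorted.length * 0.25)];
  const q3 = sorted[Math.floor(sorted.length * 0.75)];
  const iqr = q3 - q1;
  return sorted.filter((t) => t >= q1 - 1.5 * iqr && t <= q3 + 1.5 * iqr);
}
```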
Reported statistics include:

- `ops` - Operations per second.
- `rme` - Relative Margin of Error (RME).
- `me` - Margin of error.
- `stddev` - Sample standard deviation.
- `mean` - Sample arithmetic mean.
- `sample` - The sample of execution times.
- `sem` - The standard error of the mean.
- `variance` - The sample variance.
- `size` - Sample size.
- `cohensd` - Cohen's d effect size.
- `sed` - The standard error of the difference in means.
- `dmedian` - The difference between the sample medians of the two benchmark runs.
- `pmedian` - The percentage difference between the sample medians of the two benchmark runs.
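As a rough illustration of how the core statistics above relate to each other, the following sketch derives them from a sample of execution times in milliseconds. The sample values and the 1.96 z-value (a ~95% confidence interval) are illustrative; the library's own computation may differ:

```ts
// Illustrative only - shows how the reported statistics relate to a sample of times (ms).
const sample = [1.2, 1.1, 1.3, 1.2, 1.4];

const size = sample.length;
const mean = sample.reduce((a, b) => a + b, 0) / size;
const variance = sample.reduce((a, t) => a + (t - mean) ** 2, 0) / (size - 1);
const stddev = Math.sqrt(variance);
const sem = stddev / Math.sqrt(size);
const me = sem * 1.96;          // margin of error at ~95% confidence (z approximation)
const rme = (me / mean) * 100;  // relative margin of error, as a percentage
const ops = 1000 / mean;        // operations per second when times are in milliseconds

console.log({ size, mean, variance, stddev, sem, me, rme, ops });
```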
## Installation

```sh
npm install @pawel-up/benchmark
# or
yarn add @pawel-up/benchmark
```
## Basic Usage (Single Benchmark)

```ts
import { Benchmarker } from '@pawel-up/benchmark';

// Your function to benchmark
async function myAsyncFunction() {
  // ... your code ...
  await new Promise((resolve) => setTimeout(resolve, 10));
}

async function main() {
  const benchmarker = new Benchmarker('My Async Benchmark', myAsyncFunction, {
    maxIterations: 100,
    maxExecutionTime: 5000,
  });
  await benchmarker.run();
  const report = benchmarker.getReport();
  console.log(report);
}

main();
```

Note: This example uses `console.log` for demonstration purposes. The core library does not include any built-in reporters.
## Using Benchmark Suites

```ts
import { Suite } from '@pawel-up/benchmark';

// Your functions to benchmark
function myFunction1() {
  // ... your code ...
}

function myFunction2() {
  // ... your code ...
}

async function main() {
  const suite = new Suite('My Benchmark Suite', { maxExecutionTime: 10000 });
  suite.setSetup(async () => {
    console.log('Running setup function...');
    // Do some setup work here...
    await new Promise((resolve) => setTimeout(resolve, 1000)); // Example async setup
    console.log('Setup function completed.');
  });
  suite.setup();
  suite.add('Function 1', myFunction1);
  suite.setup();
  suite.add('Function 2', myFunction2);
  await suite.run();
  const report = suite.getReport();
  console.log(report);
}

main();
```

Note: This example uses `console.log` for demonstration purposes. The core library does not include any built-in reporters.
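Since reports are plain JSON-serializable objects, one simple (hypothetical) way to persist them for later comparison is to write them to disk with Node's `fs/promises`. The helper and file name below are arbitrary choices that happen to match the files read in the next example:

```ts
import { writeFile } from 'fs/promises';

// Hypothetical helper: persist a suite report so it can be compared later.
// The file name is arbitrary; it simply matches the compareFunction example below.
async function saveReport(report: object, path = 'suite_report_1.json'): Promise<void> {
  await writeFile(path, JSON.stringify(report, null, 2), 'utf-8');
}
```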
## Using compareFunction

```ts
import { compareFunction, SuiteReport } from '@pawel-up/benchmark';
import * as fs from 'fs/promises';

async function main() {
  // Load suite reports from files (example)
  const suiteReport1 = JSON.parse(await fs.readFile('suite_report_1.json', 'utf-8')) as SuiteReport;
  const suiteReport2 = JSON.parse(await fs.readFile('suite_report_2.json', 'utf-8')) as SuiteReport;
  const suiteReport3 = JSON.parse(await fs.readFile('suite_report_3.json', 'utf-8')) as SuiteReport;
  const suiteReport4 = JSON.parse(await fs.readFile('suite_report_4.json', 'utf-8')) as SuiteReport;
  const suiteReports = [suiteReport1, suiteReport2, suiteReport3, suiteReport4];

  // Example 1: Compare with JSON output
  compareFunction('myFunction', suiteReports, { format: 'json' });

  // Example 2: Compare with CSV output
  compareFunction('myFunction', suiteReports, { format: 'csv' });

  // Example 3: Compare with default table output
  compareFunction('myFunction', suiteReports);
}

main().catch(console.error);
```

Note: This example uses `console.log` for demonstration purposes. The core library does not include any built-in reporters.
@pawel-up/benchmark goes beyond simple timing measurements: it leverages statistical methods to provide a more accurate and meaningful assessment of function performance. By using a statistical approach, @pawel-up/benchmark helps you make data-driven decisions about your code's performance, leading to more effective optimizations and a better understanding of your library's behavior.
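For instance, the reported Cohen's d can be read against the commonly cited thresholds of roughly 0.2 (small), 0.5 (medium), and 0.8 (large). A small helper like the one below, which is not part of the library, makes that interpretation explicit:

```ts
// Hypothetical helper (not part of the library): classify a Cohen's d effect size
// using the commonly cited thresholds of 0.2 (small), 0.5 (medium), and 0.8 (large).
function describeEffectSize(cohensd: number): string {
  const d = Math.abs(cohensd);
  if (d < 0.2) return 'negligible';
  if (d < 0.5) return 'small';
  if (d < 0.8) return 'medium';
  return 'large';
}

console.log(describeEffectSize(0.63)); // "medium"
```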
## API Reference

### Benchmarker Class

`new Benchmarker(name: string, fn: () => unknown | Promise<unknown>, options?: BenchmarkOptions)`

Creates a new `Benchmarker` instance.

- `name`: The name of the benchmark.
- `fn`: The function to benchmark (can be synchronous or asynchronous).
- `options`: An optional `BenchmarkOptions` object to configure the benchmark.

`async run(): Promise<void>`

Runs the benchmark.

`getReport(): BenchmarkReport`

Returns a `BenchmarkReport` object with the benchmark results.

### Suite Class

`new Suite(name: string, options?: BenchmarkOptions)`

Creates a new `Suite` instance.

- `name`: The name of the suite.
- `options`: An optional `BenchmarkOptions` object to configure the suite.

`add(name: string, fn: () => unknown | Promise<unknown>): this`

Adds a benchmark to the suite.

- `name`: The name of the benchmark.
- `fn`: The function to benchmark.

`addReporter(reporter: Reporter, timing: ReporterExecutionTiming): this`

Adds a reporter to the suite.

- `reporter`: The reporter instance.
- `timing`: When the reporter should be executed (`'after-each'` or `'after-all'`).

`setSetup(fn: () => unknown | Promise<unknown>): this`

Sets the setup function for the suite.

- `fn`: The setup function.

`setup(): this`

`async run(): Promise<SuiteReport>`

Runs the suite.

`getReport(): SuiteReport`

Returns a `SuiteReport` object with the suite results.

### Reporter

`async run(report: BenchmarkReport | SuiteReport): Promise<void>`

Invoked with a `BenchmarkReport` or `SuiteReport` when the reporter executes (see `addReporter`).
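Because the core ships no built-in reporters, integrations typically provide their own. The sketch below assumes that an object with an async `run(report)` method, matching the signature above, satisfies the `Reporter` contract; the exact type required by the library may differ:

```ts
import { Suite, type BenchmarkReport, type SuiteReport } from '@pawel-up/benchmark';

// Minimal sketch of a custom reporter, assuming the run(report) contract shown above.
// Whether this shape satisfies the library's Reporter type is an assumption.
class ConsoleReporter {
  async run(report: BenchmarkReport | SuiteReport): Promise<void> {
    console.log(JSON.stringify(report, null, 2));
  }
}

const suite = new Suite('Reported Suite');
suite.addReporter(new ConsoleReporter(), 'after-all');
```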
### BenchmarkOptions Interface

- `maxExecutionTime?: number`
- `warmupIterations?: number`
- `innerIterations?: number`
- `maxInnerIterations?: number`
- `timeThreshold?: number`
- `minsize?: number`
- `maxIterations?: number`
- `debug?: boolean`
- `logLevel?: number`
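As an illustration, a configuration using some of these fields might look like the following. The concrete values, and the assumption that time-related options are in milliseconds, are examples rather than the library's defaults:

```ts
import { Benchmarker, type BenchmarkOptions } from '@pawel-up/benchmark';

// Illustrative values only - not the library's defaults. Millisecond units are assumed.
const options: BenchmarkOptions = {
  maxExecutionTime: 5000,
  warmupIterations: 10,
  maxIterations: 100,
  debug: false,
};

const benchmarker = new Benchmarker('Configured benchmark', () => JSON.parse('{"a": 1}'), options);
```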
### BenchmarkReport Interface

- `kind: 'benchmark'`
- `name: string`
- `ops: number` - Operations per second
- `rme: number` - Relative Margin of Error (RME)
- `stddev: number` - Sample standard deviation
- `mean: number` - Sample arithmetic mean
- `me: number` - Margin of error
- `sample: number[]` - The sample of execution times
- `sem: number` - The standard error of the mean
- `variance: number` - The sample variance
- `size: number` - Sample size
- `date: string`
### SuiteReport Interface

- `kind: 'suite'`
- `name: string`
- `date: string`
- `results: BenchmarkReport[]`
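As an example of how these shapes fit together, the sketch below walks a `SuiteReport` and prints each result's throughput and relative margin of error, using only the fields listed above:

```ts
import type { SuiteReport } from '@pawel-up/benchmark';

// Walk a suite report and print each benchmark's throughput and relative error.
function summarize(report: SuiteReport): void {
  for (const result of report.results) {
    console.log(`${result.name}: ${result.ops.toFixed(2)} ops/sec (±${result.rme.toFixed(2)}%)`);
  }
}
```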
Contributions are welcome! Please see the contributing guidelines for more information.
This project is licensed under the MIT License.