Tinybench πŸ”Ž


Benchmark your code easily with Tinybench, a simple, tiny, lightweight (~10 KB, ~2 KB minified and gzipped) benchmarking library. You can run your benchmarks in multiple JavaScript runtimes; Tinybench is built entirely on Web APIs, with proper timing via process.hrtime or performance.now.

  • Accurate and precise timing based on the environment
  • Statistically analyzed latency and throughput values: standard deviation, margin of error, variance, percentiles, etc.
  • Concurrency support
  • Event and EventTarget compatible events
  • No dependencies

If you need more tiny libraries like tinypool or tinyspy, please consider submitting an RFC

Installing

$ npm install -D tinybench

Usage

You can start benchmarking by instantiating the Bench class and adding benchmark tasks to it.

import { Bench } from 'tinybench'

const bench = new Bench({ name: 'simple benchmark', time: 100 })

bench
  .add('faster task', () => {
    console.log('I am faster')
  })
  .add('slower task', async () => {
    await new Promise(resolve => setTimeout(resolve, 1)) // we wait 1ms :)
    console.log('I am slower')
  })

await bench.run()

console.log(bench.name)
console.table(bench.table())

// Output:
// simple benchmark
// β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
// β”‚ (index) β”‚ Task name     β”‚ Latency avg (ns)  β”‚ Latency med (ns)      β”‚ Throughput avg (ops/s) β”‚ Throughput med (ops/s) β”‚ Samples β”‚
// β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
// β”‚ 0       β”‚ 'faster task' β”‚ '63768 Β± 4.02%'   β”‚ '58954 Β± 15255.00'    β”‚ '18562 Β± 1.67%'        β”‚ '16962 Β± 4849'         β”‚ 1569    β”‚
// β”‚ 1       β”‚ 'slower task' β”‚ '1542543 Β± 7.14%' β”‚ '1652502 Β± 167851.00' β”‚ '808 Β± 19.65%'         β”‚ '605 Β± 67'             β”‚ 65      β”‚
// β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The add method accepts a task name and a task function to benchmark. It returns a reference to the Bench instance, so calls can be chained to add more tasks to the same instance.

Note that task names must be unique within an instance, because Tinybench stores tasks in a Map keyed by name.

Also note that Tinybench does not log any results by default. After running the benchmark, you can extract the relevant stats from bench.tasks (or any other API) and process them however you want.
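As an illustration of what "processing them however you want" can mean, here is a sketch of the kind of statistics shown in the table above (mean, standard deviation, margin of error). This is an assumption for illustration, not tinybench's actual source; the function name `stats` is hypothetical.

```javascript
// Compute summary statistics from an array of latency samples (in ns),
// producing the "avg ± relative margin of error" form seen in bench.table().
function stats(samples) {
  const n = samples.length
  const mean = samples.reduce((a, b) => a + b, 0) / n
  // sample variance (Bessel's correction: divide by n - 1)
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1)
  const sd = Math.sqrt(variance)        // standard deviation
  const sem = sd / Math.sqrt(n)         // standard error of the mean
  const moe = sem * 1.96                // ~95% margin of error
  return { mean, sd, moe, rme: (moe / mean) * 100 }
}

const s = stats([100, 110, 90, 105, 95])
console.log(`${s.mean.toFixed(0)} ± ${s.rme.toFixed(2)}%`) // → "100 ± 6.93%"
```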

More usage examples can be found in the examples directory.

Docs

Events

Both the Task and Bench classes extend the EventTarget object. So you can attach listeners to different types of events in each class instance using the universal addEventListener and removeEventListener methods.

// runs on each benchmark task's cycle
bench.addEventListener('cycle', (evt) => {
  const task = evt.task!;
});
// runs only on this benchmark task's cycle
task.addEventListener('cycle', (evt) => {
  const task = evt.task!;
});
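Because these are plain EventTarget events, the dispatch mechanism can be demonstrated with nothing but Node's built-in EventTarget. This sketch does not import tinybench; `MiniBench` is a hypothetical stand-in showing how a class extending EventTarget can fire 'cycle' events to listeners.

```javascript
// A class extending EventTarget can dispatch named events, the way
// Tinybench fires 'cycle' per task cycle and 'complete' when done.
class MiniBench extends EventTarget {
  run() {
    for (let i = 0; i < 3; i++) {
      this.dispatchEvent(new Event('cycle'))   // one event per "cycle"
    }
    this.dispatchEvent(new Event('complete'))
  }
}

const mini = new MiniBench()
let cycles = 0
mini.addEventListener('cycle', () => { cycles++ })
mini.addEventListener('complete', () => {
  console.log(`cycles seen: ${cycles}`)        // → "cycles seen: 3"
})
mini.run()
```

dispatchEvent is synchronous, so all listeners have run by the time run() returns.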

process.hrtime

If you want more accurate results on Node.js via process.hrtime, import the hrtimeNow function from the library and pass it to the Bench options.

import { hrtimeNow } from 'tinybench'

const bench = new Bench({ now: hrtimeNow })

Note that using hrtimeNow may make your benchmarks slower.
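For intuition about what such a timer looks like, here is a sketch of a process.hrtime-based "now" function, built only from Node built-ins. The name `hrtimeNowSketch` is hypothetical; this is an illustration of the idea, not tinybench's implementation.

```javascript
// process.hrtime.bigint() returns a nanosecond-resolution timestamp;
// dividing by 1e6 yields milliseconds, the same unit performance.now() uses.
const hrtimeNowSketch = () => Number(process.hrtime.bigint()) / 1e6

const t0 = hrtimeNowSketch()
for (let i = 0; i < 1e6; i++);        // something cheap to measure
const elapsedMs = hrtimeNowSketch() - t0

console.log(elapsedMs >= 0)           // → true
```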

Concurrency

  • When mode is set to null (default), concurrency is disabled.
  • When mode is set to 'task', each task's iterations (calls of the task function) run concurrently.
  • When mode is set to 'bench', the different tasks within the bench run concurrently, i.e. the tasks' cycles overlap.

bench.threshold = 10 // The maximum number of concurrent tasks to run. Defaults to Number.POSITIVE_INFINITY.
bench.concurrency = 'task' // The concurrency mode to determine how tasks are run.
await bench.run()
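The idea behind a concurrency threshold can be sketched with a plain worker pool, using no tinybench code at all. The function `runWithThreshold` is a hypothetical illustration of the pattern, not tinybench's internals.

```javascript
// Run async jobs with at most `threshold` in flight at once:
// each worker pulls the next job index until the queue is drained.
async function runWithThreshold(jobs, threshold) {
  const results = []
  let next = 0
  async function worker() {
    while (next < jobs.length) {
      const i = next++              // claimed synchronously, so no races
      results[i] = await jobs[i]()
    }
  }
  const workers = Array.from(
    { length: Math.min(threshold, jobs.length) },
    () => worker()
  )
  await Promise.all(workers)
  return results
}

const jobs = [1, 2, 3, 4].map(n => async () => n * 10)
runWithThreshold(jobs, 2).then(res => console.log(res)) // logs [ 10, 20, 30, 40 ]
```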

Aborting Benchmarks

Tinybench supports aborting benchmarks using AbortSignal at both the bench and task levels:

Bench-level Abort

Abort all tasks in a benchmark by passing a signal to the Bench constructor:

const controller = new AbortController()

const bench = new Bench({ signal: controller.signal })

bench
  .add('task1', () => {
    // This will be aborted
  })
  .add('task2', () => {
    // This will also be aborted
  })

// Abort all tasks
controller.abort()

await bench.run()
// Both tasks will be aborted

Task-level Abort

Abort individual tasks without affecting other tasks by passing a signal to the task options:

const controller = new AbortController()

const bench = new Bench()

bench
  .add('abortable task', () => {
    // This task can be aborted independently
  }, { signal: controller.signal })
  .add('normal task', () => {
    // This task will continue normally
  })

// Abort only the first task
controller.abort()

await bench.run()
// Only 'abortable task' will be aborted, 'normal task' continues

Abort During Execution

You can abort benchmarks while they're running:

const controller = new AbortController()

const bench = new Bench({ time: 10000 }) // Long-running benchmark

bench.add('long task', async () => {
  await new Promise(resolve => setTimeout(resolve, 100))
}, { signal: controller.signal })

// Abort after 1 second
setTimeout(() => controller.abort(), 1000)

await bench.run()
// Task will stop after ~1 second instead of running for 10 seconds
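The underlying mechanism can be shown with standard Web APIs alone: a loop that checks signal.aborted between iterations stops soon after abort() fires, much as an abortable benchmark stops between task iterations. `abortableLoop` is a hypothetical sketch, not tinybench code.

```javascript
const controller = new AbortController()

// Run up to 1000 iterations, but bail out as soon as the signal is aborted.
async function abortableLoop(signal) {
  let iterations = 0
  while (!signal.aborted && iterations < 1000) {
    await new Promise(resolve => setTimeout(resolve, 1))
    iterations++
  }
  return iterations
}

setTimeout(() => controller.abort(), 20) // abort after ~20ms

abortableLoop(controller.signal).then(n => {
  console.log(`stopped after ${n} iterations`) // far fewer than 1000
})
```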

Abort Events

Both Bench and Task emit abort events when aborted:

const controller = new AbortController()
const bench = new Bench()

bench.add('task', () => {
  // Task function
}, { signal: controller.signal })

const task = bench.getTask('task')

// Listen for abort events
task.addEventListener('abort', () => {
  console.log('Task aborted!')
})

bench.addEventListener('abort', () => {
  console.log('Bench received abort event!')
})

controller.abort()
await bench.run()

Note: When a task is aborted, task.result.aborted will be true, and the task will have completed any iterations that were running when the abort signal was received.
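The AbortSignal you pass in also fires its own standard 'abort' event, which is worth knowing if you want a listener independent of any Bench or Task. This sketch uses only built-in Web APIs, no tinybench:

```javascript
const controller = new AbortController()

let seen = false
// AbortSignal is itself an EventTarget and emits 'abort' once.
controller.signal.addEventListener('abort', () => { seen = true })

controller.abort()

console.log(controller.signal.aborted, seen) // → true true
```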

Prior art

Authors


Mohammad Bagher

Credits


Uzlopak

poyoho

Contributing

Feel free to create issues/discussions and then PRs for the project!

Sponsors

Your sponsorship can make a huge difference in continuing our work in open source!
