
nodejs-microbenchmarks

Commits

List of commits on branch main.
1aab40749ccd5329aa9a5ea019df0ef7d44d232c (verified)
feat: add another algorithm to is digit
SethFalco committed 4 months ago

c4ed831d854b0d079640be2f8cfbeeec40e270f3 (verified)
chore: rename project to nodejs-microbenchmarks
SethFalco committed 4 months ago

619709e2aa3f22f86b265e8cc10fbc6ac5c2342f (verified)
feat: add benchmark to create array of sequential numbers
SethFalco committed 4 months ago

e0c9b64c7b89feedf7f0da19271ef0fd535bdf84 (verified)
chore: order algorithms from fastest to slowest based on single run
SethFalco committed 4 months ago

51d289a4d8c08f40bad39e7472fdd6cbe4a6fc3b (verified)
docs: add how to run benchmark section
SethFalco committed 7 months ago

a5e73ac2a1282bd47359f23afbb87290c8bbd00b (verified)
initial commit
SethFalco committed 7 months ago

README

Node.js Microbenchmarks

Just some benchmarks I've written during development, both professionally and while contributing to open source. The repository exists so I can scaffold benchmarks quickly and refer back to benchmark results later.

While the repository is public, the purpose is to share why I made certain decisions. This is not a collaborative effort to publish and maintain benchmarks together. If you're able to pitch a better solution to a problem covered in the repository, feel free to share it! However, pull requests adding benchmarks for new problems won't be accepted.

Running Benchmarks

Install npm dependencies with:

npm i

Then run the relevant benchmark with Node.js:

BENCHMARK=is-string-whitespace
node src/benchmarks/$BENCHMARK.js

Methodology

All test cases are constructed the same way and use the same options.

For input, instead of testing a single set of arguments, we test an array of arguments. This shows which approach is generally most performant, rather than which is fastest for one specific scenario.
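
As a rough illustration, a benchmark in this style loops every candidate over the same shared array of inputs. This is only a minimal sketch: the candidate functions and data below are hypothetical and not taken from the repository, and it assumes plain performance.now() timing rather than any particular benchmarking library.

// Hypothetical candidates for an "is string whitespace" check.
const candidates = {
  regexTest: (s) => /^\s*$/.test(s),
  trimLength: (s) => s.trim().length === 0,
};

// One shared array of inputs covering several scenarios, not just one.
const data = ['', '   ', '\t\n', 'abc', '  x  ', 'longer mixed input'];

for (const [name, fn] of Object.entries(candidates)) {
  const start = performance.now();
  for (let i = 0; i < 1_000_000; i++) {
    for (const input of data) fn(input);
  }
  console.log(`${name}: ${(performance.now() - start).toFixed(1)} ms`);
}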

The data array may need to be tweaked depending on the data you expect to encounter in the real world. For example, two solutions could both be correct but perform differently depending on the input they receive. Sometimes the generally slower solution is the better choice, because the input it handles faster is what you'll encounter 99% of the time in production.
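
As a sketch of what that tweak might look like (the proportions and inputs here are purely hypothetical), the shared data array can simply be skewed toward the case you expect to dominate in production:

// Roughly 99% common inputs, 1% rare inputs.
const commonCase = Array.from({ length: 99 }, () => 'mostly text, rarely whitespace');
const rareCase = ['   '];
const data = [...commonCase, ...rareCase];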

For most benchmarks, we enforce that all functions produce identical output for identical input. However, there are a few exceptions due to quirks like floating-point precision. Cases like these have warnings documented at the top of the file, and you'll need to strike a balance between performance and precision on a case-by-case basis.
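
For example, output equality can be checked up front by comparing every candidate against a reference implementation before any timing runs. Again, this is a hedged sketch reusing the hypothetical candidates and data from above, not code from the repository.

const assert = require('node:assert');

// Hypothetical candidates and shared input data, as in the sketch above.
const candidates = {
  regexTest: (s) => /^\s*$/.test(s),
  trimLength: (s) => s.trim().length === 0,
};
const data = ['', '   ', '\t\n', 'abc', '  x  '];

// Treat the first candidate as the reference; every other candidate must agree on every input.
const [reference, ...rest] = Object.keys(candidates);
for (const input of data) {
  const expected = candidates[reference](input);
  for (const name of rest) {
    assert.deepStrictEqual(
      candidates[name](input),
      expected,
      `${name} disagrees with ${reference} for ${JSON.stringify(input)}`
    );
  }
}
console.log('All candidates produce identical output.');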