
paper-benchmark (public repository: 3 stars, 0 forks, 1 open issue)

Commits

Commits on branch master:

- cea42f5a391e4c0c9235e11e2ba58af4061c4cea: Add copyright release form (jiahao, 8 years ago)
- 73415d616994ca110d3d072bdb3da06b4c563678: Add @alanedelman as author (jiahao, 8 years ago)
- 68e44d2ef9822873eec3624b2ff63a3ea639af17: Merge pull request #7 from jiahao/jr/wording (jiahao, 8 years ago)
- f7e34d707dabc6e70129bbaba99ecbcdb6f26d42: more appropriate wording (jrevels, 8 years ago)
- be89de5654469e06a741029bd16b1099c48b66d6: grammar fix (jrevels, 8 years ago)
- fa49b3e4c508a511da480ce33d411889128b3a5b: Update Makefile for arXiv submission (jiahao, 8 years ago)

README


Robust benchmarking in noisy environments

A paper by Jiahao Chen and Jarrett Revels (Julia Labs, MIT CSAIL), to be published in the Proceedings of the 20th Annual IEEE High Performance Extreme Computing Conference (HPEC 2016).


Abstract

We propose a benchmarking strategy that is robust in the presence of timer error, OS jitter and other environmental fluctuations, and is insensitive to the highly nonideal statistics produced by timing measurements. We construct a model that explains how these strongly nonideal statistics can arise from environmental fluctuations, and also justifies our proposed strategy. We implement this strategy in the BenchmarkTools Julia package, where it is used in production continuous integration (CI) pipelines for developing the Julia language and its ecosystem.
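To illustrate the intuition behind a robust strategy (this is a simplified sketch, not the paper's exact estimator): environmental noise such as OS jitter only ever adds time to a measurement, so the minimum over many independent samples is far less sensitive to outliers than the mean. The function names below (`sample_time`, `robust_estimate`) are hypothetical and chosen for this example.

```julia
# Sketch: time `f(args...)` over many evaluations, then take the minimum
# across samples. Since jitter is additive and non-negative, the minimum
# is a robust location estimate of the true run time.
function sample_time(f, args...; evals = 1000)
    t0 = time_ns()
    for _ in 1:evals
        f(args...)
    end
    return (time_ns() - t0) / evals  # mean time per evaluation, in ns
end

robust_estimate(f, args...; samples = 100) =
    minimum(sample_time(f, args...) for _ in 1:samples)
```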

Code and data

The main benchmarking code is available in the BenchmarkTools Julia package, v0.0.3. The specific code used to run these experiments, together with the data generated on our test machine, is available in the experiments directory of this repository.
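A minimal usage example of the package's `@benchmark` macro is shown below; note that this sketch assumes the API of BenchmarkTools at the time of writing, and details may differ in the v0.0.3 release referenced above.

```julia
using BenchmarkTools

# @benchmark runs the expression repeatedly and collects a distribution
# of timing samples, rather than reporting a single noisy measurement.
b = @benchmark sum(rand(1000))
```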