paper-benchmark

public · 3 stars · 0 forks · 1 issue

Commits

List of commits on branch master:

- "README: point to experiment code and data" (ac1fe89, jjiahao, 8 years ago, unverified)
- "Update readme" (97ff0ca, jjiahao, 8 years ago, unverified)
- "Does including graphicx fix Travis failure?" (fcafa80, jjiahao, 8 years ago, unverified)
- "Change back to published title" (75256ab, jjiahao, 8 years ago, unverified)
- "Another round of final revisions" (fd12d19, jjiahao, 8 years ago, unverified)
- "better wording for Kalibera explanation" (1981993, jjrevels, 8 years ago, unverified)

README

The README file for this repository.

Robust benchmarking in noisy environments

A paper by Jiahao Chen and Jarrett Revels (Julia Labs, MIT CSAIL), to be published in the Proceedings of the 20th Annual IEEE High Performance Extreme Computing Conference (HPEC 2016).


Abstract

We propose a benchmarking strategy that is robust in the presence of timer error, OS jitter, and other environmental fluctuations, and is insensitive to the highly nonideal statistics produced by timing measurements. We construct a model that explains how these strongly nonideal statistics can arise from environmental fluctuations, and also justifies our proposed strategy. We implement this strategy in the BenchmarkTools Julia package, where it is used in production continuous integration (CI) pipelines for developing the Julia language and its ecosystem.
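One intuition behind this kind of robustness can be sketched in a few lines of Julia (an illustration only, not the paper's exact model or estimator): timer error and OS jitter can only add time to a measurement, so the noise distribution is one-sided, and the minimum over many repeated samples is a far more stable location estimate than the mean. The helper name `min_time_ns` below is hypothetical:

```julia
# Illustrative sketch (not the paper's model): measured time = true time +
# nonnegative noise, so the minimum over repeated samples tends toward the
# true time. `min_time_ns` is a hypothetical helper, not part of any package.
function min_time_ns(f; samples::Int = 1_000)
    f()  # warm-up call so compilation time is not measured
    best = typemax(UInt64)
    for _ in 1:samples
        t0 = time_ns()
        f()
        best = min(best, time_ns() - t0)
    end
    return best  # elapsed nanoseconds of the fastest observed run
end
```

For example, `min_time_ns(() -> sum(rand(100)))` returns the fastest observed timing in nanoseconds for that closure.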

Code and data

The main benchmarking code is available from the BenchmarkTools Julia package, v0.0.3. The specific code used to run these experiments and the data generated on our test machine are available from the experiments directory in this repository.
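For readers who want to try the package, a short usage sketch follows. It uses the `@benchmark` macro and the minimum/median trial estimates documented by BenchmarkTools; the exact API of the v0.0.3 release referenced above may differ from current releases.

```julia
using BenchmarkTools  # assumes the package is installed

# @benchmark runs the expression repeatedly and records the whole sample
# distribution of timings, not just a single measurement.
trial = @benchmark sum(rand(1000))

# Summary estimates of the trial; the minimum is least affected by
# one-sided noise such as OS jitter.
best = minimum(trial)
mid  = median(trial)
println(time(best), " ns (minimum), ", time(mid), " ns (median)")
```

The `time(::TrialEstimate)` accessor returns the estimate in nanoseconds, so the two printed numbers can be compared directly.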