ai-and-efficiency

Commits

List of commits on branch master.

- 1f280d11be9d3ef2b21f4f2317786233bda288ab: Merge pull request #2 from nottombrown/patch-1 (DannyHernandez, verified, 4 years ago)
- 5121782c2f1edee0e43b6fa4a5b5e2aca5fc9763: Update README.md (nottombrown, verified, 4 years ago)
- 821ac8458a279dd5cc0fa3288a30fdaf7f45c0a4: updated links and answer for what is a tf/s-day (unverified, 4 years ago)
- 18dfb776f5b3df836ca52903ba5ad5eb3738e66f: updated citation (unverified, 4 years ago)
- 2cb4d81cda4f90d7a1037229b9b61eda456ab3a5: capitalization (unverified, 4 years ago)
- 4899cb130028b1b23d1fc78c1b4a9e3ee7510db2: capitalization (unverified, 4 years ago)

README

The README file for this repository.

Algorithmic Efficiency SOTA Submissions

We found that in 2019 it took 44x less compute to train a neural network to AlexNet-level performance than it did in 2012. (Moore's Law alone would have yielded only an 11x reduction in cost over this period.)

Going forward, we'll use this Git repository to help publicly track state-of-the-art (SOTA) algorithmic efficiency. We're beginning by tracking training efficiency SOTAs in image recognition and translation, at two performance levels each.
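As a rough sanity check on those two numbers, the arithmetic below reproduces them from the AlexNet-level table that follows; the 2-year doubling period used for the Moore's Law comparison is an assumption, not something stated in this README.

```python
# Sanity-check the headline 44x / 11x figures (2-year Moore's Law doubling is assumed).
alexnet_2012 = 3.1         # tf/s-days to reach AlexNet-level accuracy in 2012 (table below)
efficientnet_2019 = 0.069  # tf/s-days for the same accuracy in 2019 (table below)

algorithmic_gain = alexnet_2012 / efficientnet_2019  # ~44.9x, reported as 44x
moores_law_gain = 2 ** ((2019 - 2012) / 2)           # ~11.3x over 7 years, reported as 11x

print(f"algorithmic efficiency gain: {algorithmic_gain:.1f}x")
print(f"Moore's Law projection:      {moores_law_gain:.1f}x")
```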

AlexNet-level performance

79.1% top-5 accuracy on ImageNet

| Publication   | Compute (tf/s-days) | Reduction Factor | Analysis          | Date      |
|---------------|---------------------|------------------|-------------------|-----------|
| AlexNet       | 3.1                 | 1                | AI and Efficiency | 6/1/2012  |
| GoogLeNet     | 0.71                | 4.3              | AI and Efficiency | 9/17/2014 |
| MobileNet     | 0.28                | 11               | AI and Efficiency | 4/17/2017 |
| ShuffleNet    | 0.15                | 21               | AI and Efficiency | 7/3/2017  |
| ShuffleNet_v2 | 0.12                | 25               | AI and Efficiency | 6/30/2018 |
| EfficientNet  | 0.069               | 44               | EfficientNet      | 5/28/2019 |

ResNet-50-level performance

92.9% top-5 accuracy on ImageNet

| Publication  | Compute (tf/s-days) | Reduction Factor | Analysis          | Date      |
|--------------|---------------------|------------------|-------------------|-----------|
| ResNet-50    | 17                  | 1                | AI and Efficiency | 1/10/2015 |
| EfficientNet | 0.75                | 10               | EfficientNet      | 5/28/2019 |
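Both vision targets are defined by top-5 accuracy. The NumPy sketch below only illustrates that criterion (it is not code from this repository): a prediction counts as correct if the true label is among the model's five highest-scoring classes.

```python
import numpy as np

def top5_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of examples whose true label is among the 5 highest-scoring classes.

    logits: (N, num_classes) array of class scores; labels: (N,) integer class ids.
    """
    top5 = np.argsort(logits, axis=1)[:, -5:]      # indices of the 5 largest scores per example
    hits = (top5 == labels[:, None]).any(axis=1)   # is the true label among them?
    return float(hits.mean())
```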

Seq2Seq-level performance

34.8 BLEU on WMT-14 EN-FR

| Publication        | Compute (tf/s-days) | Reduction Factor | Analysis                  | Date      |
|--------------------|---------------------|------------------|---------------------------|-----------|
| Seq2Seq (Ensemble) | 465                 | 1                | AI and Compute            | 1/10/2014 |
| Transformer (Base) | 8                   | 61               | Attention is all you need | 1/12/2017 |

GNMT-level performance

39.92 BLEU on WMT-14 EN-FR

| Publication       | Compute (tf/s-days) | Reduction Factor | Analysis                  | Date      |
|-------------------|---------------------|------------------|---------------------------|-----------|
| GNMT              | 1620                | 1                | Attention is all you need | 1/26/2016 |
| Transformer (Big) | 181                 | 9                | Attention is all you need | 1/12/2017 |
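The translation targets above are corpus BLEU on WMT-14 EN-FR. This README doesn't prescribe a scoring script, and BLEU depends on tokenization, so the snippet below (using sacreBLEU) is only a sketch of how such a score can be computed, not the exact procedure behind the numbers in the tables.

```python
# pip install sacrebleu
import sacrebleu

hypotheses = ["le chat est assis sur le tapis"]    # system outputs, one string per sentence
references = [["le chat est assis sur le tapis"]]  # one reference stream, aligned with hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```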

To make an entry, please submit a pull request in which you:

  1. Make the appropriate update to efficiency_sota.csv.
  2. Make the appropriate update to the tables in this file, README.md.
  3. Add the relevant calculations/supporting information to the analysis folder. For example calculations, see AI and Compute and Appendices A and B in Measuring the Algorithmic Efficiency of Neural Networks; a minimal sketch of the basic conversion follows this list.
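For the basic conversion most entries need, the sketch below turns a training run's total floating-point operations into tf/s-days and a reduction factor relative to the AlexNet-level baseline; the input value is hypothetical and the variable names are not a prescribed format.

```python
# Hypothetical supporting calculation for a new entry (illustrative numbers only).
TFS_DAY_FLOPS = 1e12 * 86_400   # 1 teraflop/s sustained for a day = 8.64e16 FLOPs

total_training_flops = 4.3e15   # hypothetical total compute for a new model
baseline_tfs_days = 3.1         # AlexNet-level baseline from the first table

compute_tfs_days = total_training_flops / TFS_DAY_FLOPS
reduction_factor = baseline_tfs_days / compute_tfs_days

print(f"compute: {compute_tfs_days:.3f} tf/s-days, reduction: {reduction_factor:.0f}x")
```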

FAQ

  1. We're interested in tracking progress on additional benchmarks that have been of sustained interest for many years and continue to be of interest. Please send thoughts or analysis on such benchmarks to danny@openai.com.
  2. ImageNet is the only training data source allowed for the vision benchmarks. No human captioning, other images, or other data may be used. Automated augmentation is OK.
  3. We currently place no restrictions on training data used for translation, but may split results by appropriate categories in the future.
  4. A tf/s-day is one teraflop/s of compute sustained for one day (about 8.64e16 floating-point operations).

To cite this work, please use the following BibTeX entry.

@misc{hernandez2020efficiency,
    title = {Measuring the Algorithmic Efficiency of Neural Networks},
    author = {Danny Hernandez and Tom B. Brown},
    year = {2020},
    eprint = {2005.04305},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG},
}