
RAdam-tensorflow

public, 4 stars, 1 fork, 0 issues

Commits

List of commits on branch master:

  • ed889cadf1fdf6fc7128e98df9073fbb28527e56 update: readme (kkozistr committed 5 years ago)
  • 0d93142fdedc3724308b9569084ddc8031b177c1 refactor: some lines (kkozistr committed 5 years ago)
  • 473160b5b2d72e2bab175695441ee2331bc876e0 add: mnist results (kkozistr committed 5 years ago)
  • afe2245850ce2a0d327fa12ad58a605d7298df99 add: RAdam benchmark result on MNIST test set (kkozistr committed 5 years ago)
  • 1e3b4d46fe25095ff11409ef5bf6ea17b807512f feat: impl amsgrad (kkozistr committed 5 years ago)
  • 3f8fbc5a07f8a778ae9640196112207dd6109ca9 add: gitignore (kkozistr committed 5 years ago)

README


RAdam in TensorFlow

A TensorFlow implementation of On the Variance of the Adaptive Learning Rate and Beyond (RAdam).

This repo is based on the PyTorch implementation repo.


Explanation

Learning rate warm-up for Adam is a must-have trick for stable training in certain situations (eps tuning is an alternative), but its underlying mechanism is largely unknown. In our study, we suggest that one fundamental cause is the large variance of the adaptive learning rates, and we provide both theoretical and empirical evidence to support this.

In addition to explaining why we should use warm-up, we also propose RAdam, a theoretically sound variant of Adam.
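For intuition, here is a minimal NumPy sketch of a single RAdam update step as described in the paper: when the length of the approximated SMA (rho_t) exceeds 4, the variance of the adaptive learning rate is tractable and the adaptive step is scaled by the rectification term r_t; otherwise the update falls back to un-adapted SGD with momentum. This is only an illustration of the idea (the function and variable names are mine), not the TensorFlow optimizer shipped in this repo.

import numpy as np

def radam_step(param, grad, m, v, t,
               lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-6):
    # One RAdam update; t is the 1-based step count.
    m = beta1 * m + (1.0 - beta1) * grad           # first moment (momentum)
    v = beta2 * v + (1.0 - beta2) * grad * grad    # second moment

    m_hat = m / (1.0 - beta1 ** t)                 # bias-corrected momentum

    rho_inf = 2.0 / (1.0 - beta2) - 1.0            # maximum SMA length
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)

    if rho_t > 4.0:
        # Variance is tractable: rectify the adaptive learning rate.
        v_hat = np.sqrt(v / (1.0 - beta2 ** t))
        r_t = np.sqrt(((rho_t - 4.0) * (rho_t - 2.0) * rho_inf) /
                      ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t))
        param = param - lr * r_t * m_hat / (v_hat + eps)
    else:
        # Early steps: fall back to un-adapted SGD with momentum.
        param = param - lr * m_hat
    return param, m, v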

Requirements

  • Python 3.x
  • TensorFlow 1.x (2.x may also work)

Usage

# The learning rate can be either a scalar or a tensor.

# Use the exclude_from_weight_decay feature if you want to
# selectively exclude certain weights from weight decay.

from radam import RAdamOptimizer

optimizer = RAdamOptimizer(
    learning_rate=0.001,
    beta1=0.9,
    beta2=0.999,
    epsilon=1e-6,
    decay=0.,
    warmup_proportion=0.1,
    weight_decay=0.,
    exclude_from_weight_decay=['...'],
    amsgrad=False,
)
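
A minimal end-to-end sketch, assuming TensorFlow 1.x graph mode and that RAdamOptimizer follows the standard tf.train.Optimizer interface (so minimize() is available). The placeholder shapes and the toy linear model below are illustrative and not part of this repo.

import tensorflow as tf

from radam import RAdamOptimizer

x = tf.placeholder(tf.float32, [None, 784])   # flattened MNIST images
y = tf.placeholder(tf.int64, [None])          # integer class labels

logits = tf.layers.dense(x, 10)               # toy linear classifier
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))

optimizer = RAdamOptimizer(learning_rate=1e-3)
train_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # sess.run(train_op, feed_dict={x: batch_images, y: batch_labels})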

You can simply test the optimizers on the MNIST dataset with the provided test script.

For the RAdam optimizer:

python3 mnist_test.py --optimizer "radam"
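
The other optimizers in the results table below can presumably be selected the same way; the flag value here is an assumption based on that table, not a documented option.

python3 mnist_test.py --optimizer "adam"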

To Do

  • implement the warm-up stage

Results

Test accuracy and loss for each optimizer on several datasets under the same conditions.

MNIST Dataset

(test accuracy plot)

Optimizer   Test Acc   Time     Etc
RAdam       97.80%     2m 9s
Adam        97.68%     1m 45s
AdaGrad     90.14%     1m 38s
SGD         87.86%     1m 39s
Momentum    87.86%     1m 39s   w/ nesterov

(tested on a GTX 1060 6GB)

Citation

@article{liu2019radam,
  title={On the Variance of the Adaptive Learning Rate and Beyond},
  author={Liu, Liyuan and Jiang, Haoming and He, Pengcheng and Chen, Weizhu and Liu, Xiaodong and Gao, Jianfeng and Han, Jiawei},
  journal={arXiv preprint arXiv:1908.03265},
  year={2019}
}

Author

Hyeongchan Kim / kozistr