learning-to-learn

Public repository: 4062 stars, 601 forks, 15 issues.

Commits

List of commits on branch master.
Unverified
f3c1a8d176b8ea7cc60478bfcfdd10a7a52fd296

Merge pull request #19 from Vooban/master

sergomezcol committed 7 years ago
Unverified
61cfc4ca78a70b8e04ca86651cf5a9c601f3dabb

Sonnet's base AbstractModule now requires named arguments. See: https://github.com/deepmind/sonnet/commit/601c4f393037ea76c625d39c88d9d576f438c7ae

guillaume-chevalier committed 8 years ago
Unverified
a465d375941ce43e6fd3f55ec068756de70de226

Merge pull request #14 from choas/master

sergomezcol committed 8 years ago
Unverified
ba7c69ecba8130b5470826b4ecd1b7478c25bff1

Change imports to use sonnet; same as most other files

choas committed 8 years ago
Unverified
7ebea07bfff38a7cc42174dcceb6735f71fa73fb

Change imports to use sonnet

sergomezcol committed 8 years ago
Unverified
caa1448559d70adb05f4da8bcc2eee893d93a09d

Python 3 compatibility

sergomezcol committed 8 years ago

README

The README file for this repository.

Learning to Learn in TensorFlow

Dependencies

  • TensorFlow
  • Sonnet (https://github.com/deepmind/sonnet)
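
A minimal install sketch, assuming the pip package names tensorflow and dm-sonnet and pre-2.0 releases to match this codebase's era:

pip install "tensorflow<2" "dm-sonnet<2"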

Training

python train.py --problem=mnist --save_path=./mnist

Command-line flags (a combined example invocation follows the list):

  • save_path: If present, the optimizer is saved to the specified path whenever evaluation performance improves.
  • num_epochs: Number of training epochs.
  • log_period: Number of epochs between reports of mean performance and time.
  • evaluation_period: Number of epochs between evaluations of the optimizer.
  • evaluation_epochs: Number of evaluation epochs.
  • problem: Problem to train on. See Problems section below.
  • num_steps: Number of optimization steps.
  • unroll_length: Number of unroll steps for the optimizer.
  • learning_rate: Learning rate.
  • second_derivatives: If true, the optimizer will try to compute second derivatives through the loss function specified by the problem.
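
For example, an illustrative combined run (flag values chosen arbitrarily) that trains on the quadratic problem, logs every 100 epochs, and evaluates every 1000 epochs:

python train.py --problem=quadratic --num_epochs=10000 --log_period=100 --evaluation_period=1000 --save_path=./quadratic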

Evaluation

python evaluate.py --problem=mnist --optimizer=L2L --path=./mnist

Command-line flags (an example baseline invocation follows the list):

  • optimizer: Adam or L2L.
  • path: Path to the saved optimizer; only relevant when using the L2L optimizer.
  • learning_rate: Learning rate; only relevant when using the Adam optimizer.
  • num_epochs: Number of evaluation epochs.
  • seed: Seed for random number generation.
  • problem: Problem to evaluate on. See Problems section below.
  • num_steps: Number of optimization steps.
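
For comparison with a standard optimizer, an illustrative Adam baseline run (learning rate chosen arbitrarily):

python evaluate.py --problem=mnist --optimizer=Adam --learning_rate=0.01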

Problems

The training and evaluation scripts support the following problems (see util.py for more details):

  • simple: One-variable quadratic function.
  • simple-multi: Two-variable quadratic function, where one variable is optimized with a learned optimizer and the other with Adam.
  • quadratic: Batched ten-variable quadratic function.
  • mnist: MNIST classification using a two-layer fully connected network.
  • cifar: CIFAR-10 classification using a convolutional neural network.
  • cifar-multi: CIFAR-10 classification using a convolutional neural network with two independent learned optimizers: one for the convolutional-layer parameters and one for the fully connected-layer parameters.

New problems are easy to implement. As train.py shows, the meta_minimize method of the MetaOptimizer class takes a function that returns the TensorFlow operation computing the loss we want to minimize (see problems.py for examples).
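
As a rough sketch of such a function (the quadratic problem and the name simple_problem are illustrative, not taken from the repository), it only needs to build and return the loss operation:

import tensorflow as tf

def simple_problem():
  """Builds the graph for f(x) = x^2 and returns the loss operation."""
  x = tf.get_variable("x", shape=[], initializer=tf.ones_initializer())
  return tf.square(x, name="loss")

train.py would then hand this function to the meta-optimizer, roughly as optimizer.meta_minimize(simple_problem, ...), with the remaining arguments as defined in the code.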

All operations with Python side effects (e.g. queue creation) must be performed outside of the function passed to meta_minimize. The cifar10 function in problems.py is a good example of a loss function that uses TensorFlow queues.
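
A minimal sketch of that separation, with hypothetical data and names (slice_input_producer and batch create queues, so they sit outside the closure):

import tensorflow as tf

# Side-effecting setup: queue creation happens once, outside the loss function.
inputs = tf.random_normal([1000, 10])
labels = tf.random_uniform([1000], maxval=2, dtype=tf.int32)
single_x, single_y = tf.train.slice_input_producer([inputs, labels], shuffle=True)
x_batch, y_batch = tf.train.batch([single_x, single_y], batch_size=32)

def make_loss():
  # Pure graph construction only; no new queues or other Python side effects.
  w = tf.get_variable("w", shape=[10, 2])
  logits = tf.matmul(x_batch, w)
  return tf.reduce_mean(
      tf.nn.sparse_softmax_cross_entropy_with_logits(
          labels=y_batch, logits=logits))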

Disclaimer: This is not an official Google product.