Commits

List of commits on branch master:

  • bbc15688dd37934a12c2759cf2b34975e15901d9: first commit (committed 5 years ago)
  • 677967a4c474e76cc7c1dfbe5e2d5f661f82cfe8: Merge pull request #43 from openai/atpaino-apache-2-license (jachiam, 6 years ago)
  • 9c2603b6a59be2deae0f2d2985fde669c859bfac: Add Apache License 2.0 (atpaino, 6 years ago)
  • ac8b1eb1703737a9664555182ce35264f8c6f88c: Merge pull request #37 from bfs18/master (yburda, 6 years ago)
  • ca2a876fe2c8e08b17d5d1428059a1c54e87d490: Merge pull request #40 from christopherhesse/update-readme (christopherhesse, 6 years ago)
  • fbabdd40cd4f966216b28f5ab7e0165864eb117d: update README with repo status (christopherhesse, 6 years ago)

README


Status: Archive (code is provided as-is, no updates expected)

pixel-cnn++

This is a Python 3 / TensorFlow implementation of PixelCNN++, as described in the following paper:

PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications, by Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma, and Yaroslav Bulatov.

Our work builds on PixelCNNs, which were originally proposed by van den Oord et al. in June 2016. PixelCNNs are a class of powerful generative models with tractable likelihood that are also easy to sample from. The core convolutional neural network computes a probability distribution over the value of one pixel conditioned on the values of the pixels to the left of and above it. Below are example samples from a model trained on CIFAR-10 that achieves 2.92 bits per dimension (compared to 3.03 for the PixelCNN in van den Oord et al.):

Samples from the model (left) and samples from a model that is conditioned on the CIFAR-10 class labels (right):
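The discretized logistic likelihood named in the paper title can be illustrated for a single mixture component: each of the 256 pixel values, rescaled to [-1, 1], receives the probability mass that a logistic distribution assigns to its bin, with the two edge bins absorbing the remaining tails. The sketch below is a minimal NumPy illustration, not the repository's implementation, which sums over a mixture of such components in TensorFlow:

```python
import numpy as np

def sigmoid(z):
    # Logistic CDF with location 0 and scale 1.
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_probs(mu, s):
    """Probability of each of the 256 pixel values (rescaled to [-1, 1])
    under a single discretized logistic with mean mu and scale s.
    The paper uses a *mixture* of these; one component shown for clarity."""
    x = np.linspace(-1.0, 1.0, 256)            # bin centres, spacing 2/255
    upper = sigmoid((x + 1.0 / 255 - mu) / s)  # CDF at upper bin edge
    lower = sigmoid((x - 1.0 / 255 - mu) / s)  # CDF at lower bin edge
    upper[-1] = 1.0  # edge case x = 1: integrate up to +infinity
    lower[0] = 0.0   # edge case x = -1: integrate from -infinity
    return upper - lower
```

Because adjacent bins share edges, the per-bin masses telescope and sum to exactly one, so the model defines a proper distribution over the 256 discrete pixel values.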

Improved PixelCNN papers

This code supports multi-GPU training of our improved PixelCNN on CIFAR-10 and Small ImageNet, and is easy to adapt to additional datasets. Training on a machine with 8 Maxwell TITAN X GPUs reaches 3.0 bits per dimension in about 10 hours; converging to 2.92 takes approximately 5 days.

Setup

To run this code you need the following:

  • a machine with multiple GPUs
  • Python3
  • Numpy, TensorFlow and imageio packages:
pip install numpy tensorflow-gpu imageio

Training the model

Use the train.py script to train the model. To train the default model on CIFAR-10 simply use:

python3 train.py

You will likely want to change at least --data_dir and --save_dir, which point to the path where the data is downloaded (if not already available) and the path where checkpoints are saved.

I want to train on fewer GPUs. To train on fewer GPUs we recommend using CUDA_VISIBLE_DEVICES to narrow the visibility of GPUs to only a few, and then running the script. Don't forget to adjust the --nr_gpu flag accordingly.
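For example, to restrict training to the first two GPUs (the --nr_gpu, --data_dir and --save_dir flags are described above; the paths are hypothetical examples):

```shell
# Expose only GPUs 0 and 1 to TensorFlow, then tell the script to use 2 GPUs.
CUDA_VISIBLE_DEVICES=0,1 python3 train.py --nr_gpu 2 \
    --data_dir /tmp/pixel-cnn/data --save_dir /tmp/pixel-cnn/save
```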

I want to train on my own dataset. Have a look at the DataLoader classes in the data/ folder. You will have to write an analogous data iterator object for your own dataset, and the code should work from there.
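A custom loader might look roughly like the sketch below. The class name and method set here are hypothetical, modeled on the description above; check the DataLoader classes in the data/ folder for the exact interface train.py expects:

```python
import numpy as np

class MyDataLoader:
    """Hypothetical minimal batch iterator over an in-memory image array."""

    def __init__(self, images, batch_size, rng=None):
        # images: uint8 array of shape (N, H, W, 3)
        self.images = images
        self.batch_size = batch_size
        self.rng = rng or np.random.RandomState(1)
        self.p = 0  # position within the current epoch

    def get_observation_size(self):
        # Shape of a single observation, e.g. (32, 32, 3) for CIFAR-10.
        return self.images.shape[1:]

    def reset(self):
        self.p = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.p == 0:
            # Shuffle in place at the start of each epoch.
            self.rng.shuffle(self.images)
        if self.p + self.batch_size > len(self.images):
            # Drop the final partial batch and signal end of epoch.
            self.reset()
            raise StopIteration
        batch = self.images[self.p:self.p + self.batch_size]
        self.p += self.batch_size
        return batch
```

Dropping the trailing partial batch keeps every batch the same shape, which simplifies splitting work evenly across GPUs.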

Pretrained model checkpoint

You can download our pretrained (TensorFlow) model that achieves 2.92 bpd on CIFAR-10 here (656MB).

Citation

If you find this code useful please cite us in your work:

@inproceedings{Salimans2017PixelCNN,
  title={PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications},
  author={Tim Salimans and Andrej Karpathy and Xi Chen and Diederik P. Kingma},
  booktitle={ICLR},
  year={2017}
}
