
Original, Wasserstein, and Wasserstein-Gradient-Penalty DCGAN

(*) This repo is a modification of carpedm20/DCGAN-tensorflow.

(*) Full credit for the model structure design goes to carpedm20/DCGAN-tensorflow.

I started with carpedm20/DCGAN-tensorflow because its DCGAN implementation is not tied to a single dataset, which is uncommon: most WGAN and WGAN-GP implementations work only on 'mnist' or one given dataset.

Modifications

Here are a couple of modifications I've made that could be helpful to people implementing a GAN on their own for the first time.

  1. Added a model_type argument, which can be one of 'GAN' (original), 'WGAN' (Wasserstein distance as loss), or 'WGAN_GP' (Wasserstein distance as loss with gradient penalty), each corresponding to one variant of the GAN model.
  2. UnifiedDCGAN builds and trains the graph differently according to model_type.
  3. Some model methods were restructured so that the code is easier to read through.
  4. Many comments were added for important or potentially confusing functions, such as the conv and deconv operations in ops.py.
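As a rough sketch of how the three model_type choices differ, here is an illustrative NumPy version of the loss terms. This is not the repo's TensorFlow code; the function and variable names (e.g. d_real for the discriminator's raw output on real samples) are hypothetical, and the gradient norm for WGAN_GP is assumed to be precomputed.

```python
import numpy as np

def gan_losses(d_real, d_fake):
    # Original GAN: sigmoid cross-entropy on discriminator logits.
    eps = 1e-8  # numerical stability
    p_real = 1.0 / (1.0 + np.exp(-d_real))
    p_fake = 1.0 / (1.0 + np.exp(-d_fake))
    d_loss = -np.mean(np.log(p_real + eps)) - np.mean(np.log(1.0 - p_fake + eps))
    g_loss = -np.mean(np.log(p_fake + eps))
    return d_loss, g_loss

def wgan_losses(d_real, d_fake):
    # WGAN: raw critic outputs, no sigmoid; the critic maximizes
    # E[D(real)] - E[D(fake)], so its loss is the negation.
    d_loss = np.mean(d_fake) - np.mean(d_real)
    g_loss = -np.mean(d_fake)
    return d_loss, g_loss

def wgan_gp_losses(d_real, d_fake, grad_norm, lam=10.0):
    # WGAN_GP: same Wasserstein loss plus a gradient penalty that pushes
    # the critic's gradient norm (on interpolated samples) toward 1.
    d_loss, g_loss = wgan_losses(d_real, d_fake)
    d_loss += lam * np.mean((grad_norm - 1.0) ** 2)
    return d_loss, g_loss
```

In the actual TensorFlow graph these would be built with ops like sigmoid cross-entropy and tf.gradients, but the arithmetic per variant is the same.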

The download.py file stays the same as in carpedm20/DCGAN-tensorflow. I keep it in the repo to make fetching datasets for testing easy.

Reading

If you are interested in the math behind the loss functions of GAN and WGAN, read here.
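For quick reference, the standard objectives behind the three variants (as given in the original GAN, WGAN, and WGAN-GP papers) are:

```latex
% Original GAN (Goodfellow et al., 2014)
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)]
  + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

% WGAN (Arjovsky et al., 2017): D is a 1-Lipschitz critic
\min_G \max_{\|D\|_L \le 1} \;
  \mathbb{E}_{x \sim p_{\text{data}}}[D(x)]
  - \mathbb{E}_{z \sim p_z}[D(G(z))]

% WGAN-GP (Gulrajani et al., 2017): the Lipschitz constraint is replaced
% by a gradient penalty on interpolates \hat{x} of real and fake samples
\lambda \, \mathbb{E}_{\hat{x}}
  \big[ (\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2 \big]
```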

Related Papers

Test Runs:

```bash
# (left) original GAN
python main.py --dataset=mnist --model_type=GAN --batch_size=64 --input_height=28 --output_height=28 --max_iter=10000 --learning_rate=0.0002 --train

# (middle) WGAN
python main.py --dataset=mnist --model_type=WGAN --batch_size=64 --input_height=28 --output_height=28 --d_iter=5 --max_iter=10000 --learning_rate=0.00005 --train

# (right) WGAN with gradient penalty
python main.py --dataset=mnist --model_type=WGAN_GP --batch_size=64 --input_height=28 --output_height=28 --d_iter=5 --max_iter=10000 --learning_rate=0.0001 --train
```
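The --d_iter flag above sets how many critic (discriminator) updates run per generator update, the standard WGAN training schedule. A minimal sketch of that schedule (train_schedule is a hypothetical helper for illustration, not a function in main.py):

```python
def train_schedule(max_iter, d_iter):
    """Return the sequence of parameter updates for WGAN-style training:
    the critic ("D") is updated d_iter times per generator ("G") update."""
    steps = []
    for _ in range(max_iter):
        steps.extend(["D"] * d_iter)  # critic updates
        steps.append("G")             # one generator update
    return steps

# With --d_iter=5, every generator step is preceded by 5 critic steps.
schedule = train_schedule(max_iter=3, d_iter=5)
```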