
videoMultiGAN

public
64 stars
23 forks
1 issue

Commits

List of commits on branch master.
Unverified
762b54fa7385e7b2d41743c0c9f6b11dfd483c2d

Adding relevant images

pprannayk committed 7 years ago
Unverified
8c78b825267dda066bbd6f493b61364a2cdb72c2

Adding relevant images

pprannayk committed 7 years ago
Unverified
a5e7d85fe7437391e3530538378f1b76ff6267ac

Adding relevant images

pprannayk committed 7 years ago
Unverified
d9782b71d601ebae98d9398407464bcf16fcdc90

Adding video sample

pprannayk committed 7 years ago
Unverified
20496d4cf0bf8a32063aed21caa7f802b313bb15

Adding relevant images

pprannayk committed 7 years ago
Unverified
2ea54d35d83d253f1cc0b01090c7e9acfaabd4f1

Create vaegan.xml

pprannayk committed 7 years ago

README

The README file for this repository.

Video Multi GAN

Video generation from text using tree-like decisions with GANs. The text annotation or statement is encoded by a language model (LM) into an embedding, which is then combined with a random noise vector to generate relevant videos and images.
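The conditioning step described above can be sketched as follows. This is a minimal illustration, not the repository's code: the embedding dimension, noise dimension, and function name are assumptions.

```python
import numpy as np

def make_generator_input(text_embedding, noise_dim=100, rng=None):
    """Concatenate an LM text embedding with a random noise vector.

    Illustrative sketch of the conditioning scheme: the embedding of
    the annotation is joined with random noise to form the generator's
    input. Dimensions are hypothetical, not taken from the repository.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(noise_dim)
    return np.concatenate([text_embedding, noise])

# e.g. a 300-d fastText sentence embedding plus 100-d noise -> 400-d input
z = make_generator_input(np.zeros(300))
```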

Video Generation models

  1. VAEGAN
  2. VAEGAN with latent variable optimization
  3. VAEGAN with anti-reconstruction loss
  4. VAEGAN + anti-reconstruction loss + latent variable models
  5. Variants of the above models with different hyperparameters
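The variants above differ mainly in which loss terms they combine. A rough sketch of how such a generator objective might be assembled is shown below; the term weighting, the sign convention for the "anti-reconstruction" variant, and the function name are all assumptions, not the repository's actual losses.

```python
import numpy as np

def vaegan_generator_loss(x, x_rec, d_fake, kl, anti_weight=0.0):
    """Illustrative combination of VAEGAN generator loss terms.

    recon       -- pixel reconstruction error (VAE term)
    kl          -- KL divergence of the latent posterior (VAE term)
    adv         -- adversarial term from the critic score on fakes
    anti_weight -- > 0 subtracts reconstruction error, a guess at
                   what the "anti-reconstruction loss" variant means.
    """
    recon = np.mean((x - x_rec) ** 2)
    adv = -np.mean(d_fake)  # generator tries to raise the critic's score
    return recon + kl + adv - anti_weight * recon
```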

Model structure

  • LSTM-based model for next-frame generation
  • Discriminator in the Wasserstein GAN setting
  • Word-embedding-based language model
  • Attention-based model for the classification structure
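For the Wasserstein GAN setting mentioned above, the critic's objective can be sketched as below. This is the standard WGAN critic loss, not code from this repository; the Lipschitz constraint (weight clipping or gradient penalty) that the WGAN setting also requires is omitted.

```python
import numpy as np

def wasserstein_critic_loss(d_real, d_fake):
    """WGAN critic objective: maximise E[D(real)] - E[D(fake)].

    Written as a loss to minimise, the sign is flipped. Scores are
    unbounded critic outputs, not probabilities.
    """
    return np.mean(d_fake) - np.mean(d_real)

# Critic scoring real frames higher than fakes yields a negative loss.
loss = wasserstein_critic_loss(np.array([1.0, 1.0]), np.array([0.0, 0.0]))
```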

Training model

  • The relevant models require TensorFlow >= 1.2
  • Experimentation with the above-mentioned models
  • Training is done on a self-generated Bouncing MNIST dataset with sentence-based annotations
  • The gensim pre-trained fastText Wikipedia word embeddings are used to embed tokens as vectors
  • Non-attention-based models are used initially to generate the starting frames.
  • The GAN tree trains to look for discriminative features (unverified)
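A self-generated Bouncing MNIST dataset is built by moving a digit along a bouncing trajectory inside a fixed canvas. The sketch below shows that motion only; the canvas size, digit size, and velocities are illustrative defaults, not the parameters used in this repository.

```python
import numpy as np

def bounce_trajectory(n_frames, canvas=64, digit=28, v=(3, 2), start=(0, 0)):
    """Top-left positions of a digit bouncing inside a square canvas.

    Velocity components reverse when the digit hits a wall; positions
    are clamped so the digit never leaves the canvas. Parameters are
    assumptions for illustration.
    """
    x, y = start
    vx, vy = v
    lim = canvas - digit  # largest valid top-left coordinate
    positions = []
    for _ in range(n_frames):
        positions.append((x, y))
        x, y = x + vx, y + vy
        if not 0 <= x <= lim:
            vx = -vx
            x = max(0, min(x, lim))
        if not 0 <= y <= lim:
            vy = -vy
            y = max(0, min(y, lim))
    return positions

pos = bounce_trajectory(50)
```

Pasting the digit at each position (and rendering the sentence annotation alongside) yields one annotated video clip per trajectory.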

Datasets

  1. UCF101 : 3-channel images
  2. Bouncing MNIST

Documentation

  1. We use Sync-DRAW to generate our datasets (https://github.com/syncdraw/Sync-DRAW)
  2. UCF101 is available from the University of Montreal
  3. We use multi-GPU training (or a single K80 or Titan X)
  4. Cluster training is not possible for now

Results will not be posted here, since there may be related publications.