Hierarchical Patch VAE-GAN

Official repository of the paper "Hierarchical Patch VAE-GAN: Generating Diverse Videos from a Single Sample" (NeurIPS 2020)

Project | arXiv | Code

Real Videos

(sample GIFs of the real training videos)

Fake Videos

(sample GIFs of generated videos)

Environment Setup

Use the commands in env.sh to set up the correct conda environment.
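
A minimal sketch of that setup, assuming a standard conda workflow (the environment name and Python version below are illustrative assumptions; the authoritative commands and package versions are the ones in env.sh):

conda create -n hp-vae-gan python=3.7  # name and version are assumptions
conda activate hp-vae-gan
# then install the packages exactly as listed in env.sh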

Colab

A notebook example of training and extracting samples for image generation; it can easily be modified for video generation using the *_video(s).py files: https://colab.research.google.com/drive/1SmxFVqUvEkU7pHIwyLUz4VM1AxoVU-ER?usp=sharing

Training Video

To train on a single video, use, for example, the following command:

CUDA_VISIBLE_DEVICES=0 python train_video.py --video-path data/vids/air_balloons.mp4 --vae-levels 3 --checkname myvideotest --visualize

Common training options (a combined example follows this list):

# Network hyperparameters
--nfc                number of base channels in the model
--latent-dim         latent dimension size
--vae-levels         number of VAE levels
--generator          generator mode

# Optimization hyperparameters
--niter              number of iterations to train per scale
--rec-weight         reconstruction loss weight
--train-all          train all levels w.r.t. train-depth

# Dataset
--video-path         video path (required)
--start-frame        start frame number
--max-frames         maximum number of frames to save
--sampling-rates     sampling rates

# Misc
--visualize          visualize training with TensorBoard
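
For instance, several of these options can be combined in a single run (the numeric values here are illustrative, not recommended defaults):

CUDA_VISIBLE_DEVICES=0 python train_video.py --video-path data/vids/air_balloons.mp4 --vae-levels 3 --niter 5000 --rec-weight 10.0 --checkname myvideotest --visualize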

Training Image

To train on a single image, use, for example, the following command:

CUDA_VISIBLE_DEVICES=0 python train_image.py --image-path data/imgs/air_balloons.jpg --vae-levels 3 --checkname myimagetest --visualize

Training Baselines for Video

To train on a single video with the SinGAN re-implementation, use the following command:

CUDA_VISIBLE_DEVICES=0 python train_video_baselines.py --video-path data/vids/air_balloons.mp4 --checkname myvideotest --visualize --generator GeneratorSG --train-depth 1

Generating Samples

Use eval_*.py to generate samples from an "experiment" folder created during training. The code uses Python's glob package to evaluate multiple experiments at once; for example, the following line generates 100 video samples for all trained videos:

python eval_video.py --num-samples 100 --exp-dir run/**/*/experiment_0

Results are saved under run/**/*/experiment_0/eval.
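
Note that the pattern is expanded by Python's glob rather than by the shell, so it is safest to quote it; otherwise a shell with globstar enabled may expand it before the script sees it:

python eval_video.py --num-samples 100 --exp-dir 'run/**/*/experiment_0'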

To extract GIFs and images, use the extract_*.py files similarly, for example:

python extract_gifs.py --max-samples 4 --exp-dir run/**/*/experiment_0/eval

Results are saved under run/**/*/experiment_0/eval/gifs (or eval/images, respectively).
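
An analogous command presumably extracts still images; the script name below is a hypothetical illustration based on the extract_*.py pattern, not a confirmed file in the repository:

python extract_images.py --max-samples 4 --exp-dir run/**/*/experiment_0/eval  # extract_images.py is a hypothetical name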

Citation

If you find this work useful, please cite:

@article{gur2020hierarchical,
  title={Hierarchical Patch VAE-GAN: Generating Diverse Videos from a Single Sample},
  author={Gur, Shir and Benaim, Sagie and Wolf, Lior},
  journal={arXiv preprint arXiv:2006.12226},
  year={2020}
}