
CoMA: Convolutional Mesh Autoencoders

Generating 3D Faces using Convolutional Mesh Autoencoders

This is the official repository for "Generating 3D Faces using Convolutional Mesh Autoencoders".

[Project Page][Arxiv]

UPDATE: Thank you for using and supporting this repository over the last two years. It will no longer be maintained. Alternatively, please use:

Requirements

This code is tested with TensorFlow 1.3. Requirements (including TensorFlow) can be installed using:

pip install -r requirements.txt

Install mesh processing libraries from MPI-IS/mesh.

Data

Download the data from the Project Page.

Preprocess the data

python processData.py --data <PATH_OF_RAW_DATA> --save_path <PATH_TO_SAVE_PROCESSED_DATA>

Data pre-processing creates numpy files for the interpolation and extrapolation experiments (Section X of the paper). This creates 13 different train and test files: sliced_[train|test] is for the interpolation experiment, and <EXPRESSION>_[train|test] are for cross-validation across the 12 different expression sequences.
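As a minimal sketch of what to expect from these files (the file name and array layout are assumptions inferred from the description above, not verified against processData.py), a processed split can be inspected with numpy. Here a tiny stand-in file is fabricated just to show the round trip:

```python
import numpy as np

# Assumed layout: each processed file stores one array of shape
# (num_meshes, num_vertices, 3) -- a row of 3D vertices per mesh.
# 5023 is the FLAME/CoMA template vertex count.
dummy = np.random.rand(10, 5023, 3).astype(np.float32)
np.save("sliced_train.npy", dummy)  # hypothetical file name

train = np.load("sliced_train.npy")
print(train.shape)   # (10, 5023, 3)
print(train.dtype)   # float32
```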

Training

To train, specify a name and choose a particular train/test split. For example,

python main.py --data data/sliced --name sliced

Testing

To test, specify a name and the data. For example,

python main.py --data data/sliced --name sliced --mode test

Reproducing results in the paper

Run the following script. The released models are slightly better (~1% on average) than the ones reported in the paper.

sh generateErrors.sh

Sampling

To sample faces from the latent space, specify a model and data. For example,

python main.py --data data/sliced --name sliced --mode latent

A face template pops up. You can then use the keys qwertyui to sample faces by moving forward in each of the 8 latent dimensions. Use asdfghjk to move backward in the latent space.

For more flexible usage, refer to lib/visualize_latent_space.py.
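The traversal behind those key bindings can be sketched as follows. This is a toy illustration, not the repository's code: the decoder here is a hypothetical stand-in for the trained model, and the step size is an assumption. Each forward key adds a fixed step along one of the 8 latent dimensions; each backward key subtracts it.

```python
import numpy as np

LATENT_DIM = 8              # CoMA uses an 8-dimensional latent space
STEP = 0.1                  # assumed step size per key press (hypothetical)
FORWARD_KEYS = "qwertyui"   # key i moves +STEP along latent dimension i
BACKWARD_KEYS = "asdfghjk"  # key i moves -STEP along latent dimension i

def toy_decoder(z):
    """Hypothetical stand-in for the trained mesh decoder:
    maps an 8-D latent code to a flat vertex array (5023 vertices)."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((z.shape[0], 5023 * 3))
    return z @ W

def apply_key(z, key):
    """Return a new latent code after one key press."""
    z = z.copy()
    if key in FORWARD_KEYS:
        z[FORWARD_KEYS.index(key)] += STEP
    elif key in BACKWARD_KEYS:
        z[BACKWARD_KEYS.index(key)] -= STEP
    return z

z = np.zeros(LATENT_DIM)
z = apply_key(z, "q")   # move forward in latent dimension 0
z = apply_key(z, "s")   # move backward in latent dimension 1
mesh = toy_decoder(z).reshape(5023, 3)
```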

Acknowledgements

We thank Raffi Enficiaud and Ahmed Osman for pushing the release of psbody.mesh, an essential dependency for this project.

License

The code contained in this repository is released under the MIT License and is free for commercial and non-commercial use. The dependencies, in particular MPI-IS/mesh, and our data have their own license terms, which can be found on their respective webpages. The dependencies and data are NOT covered by the MIT License associated with this repository.

Related projects

CAPE (CVPR 2020): Based on CoMA, we build a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making a generative, animatable model of people in clothing. A large-scale mesh dataset of clothed humans in motion is also included!

When using this code, please cite

Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J. Black. "Generating 3D faces using Convolutional Mesh Autoencoders." European Conference on Computer Vision (ECCV) 2018.