

Consistency Models

This repository contains the codebase for Consistency Models, implemented in PyTorch for conducting large-scale experiments on ImageNet-64, LSUN Bedroom-256, and LSUN Cat-256. It is based on openai/guided-diffusion, which was initially released under the MIT license. Our modifications enable support for consistency distillation, consistency training, and several sampling and editing algorithms discussed in the paper.

The repository for CIFAR-10 experiments is in JAX and can be found at openai/consistency_models_cifar10.

Pre-trained models

We have released checkpoints for the main models in the paper. Before using these models, please review the corresponding model card to understand their intended use and limitations.

Here are the download links for each model checkpoint:

Dependencies

To install all packages in this codebase along with their dependencies, run

pip install -e .

To install with Docker, run the following commands:

cd docker && make build && make run

Model training and sampling

We provide examples of EDM training, consistency distillation, consistency training, single-step generation, and multistep generation in scripts/launch.sh.
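For readers new to the sampling side, here is a minimal sketch of the multistep consistency sampling loop from the paper, assuming a consistency function consistency_fn(x, sigma) that maps a noisy input at noise level sigma directly to a clean sample; consistency_fn, sigmas, and shape are illustrative assumptions, not this repo's API.

import torch

def multistep_consistency_sampling(consistency_fn, sigmas, shape, sigma_min=0.002, device="cuda"):
    # Single-step generation: denoise pure noise at the maximal noise level.
    x = consistency_fn(torch.randn(shape, device=device) * sigmas[0], sigmas[0])
    # Each additional step re-injects noise at a smaller level, then denoises again.
    for sigma in sigmas[1:]:
        z = torch.randn_like(x)
        x_noisy = x + (sigma**2 - sigma_min**2) ** 0.5 * z
        x = consistency_fn(x_noisy, sigma)
    return x

With a single noise level this reduces to one-step generation; appending smaller noise levels trades sampling speed for sample quality.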

Evaluations

To compare different generative models, we use FID, Precision, Recall, and Inception Score. These metrics can all be calculated using batches of samples stored in .npz (NumPy) files. One can evaluate samples with cm/evaluations/evaluator.py in the same way as described in openai/guided-diffusion, with reference dataset batches provided therein.
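As a concrete illustration, here is a minimal sketch of packing samples into the .npz layout the evaluator expects, assuming uint8 images of shape (N, H, W, 3) as in openai/guided-diffusion; the arrays and file names below are placeholders.

import numpy as np

# Generated samples as one uint8 array of shape (N, H, W, 3),
# plus class labels of shape (N,) for class-conditional models.
samples = np.zeros((50000, 64, 64, 3), dtype=np.uint8)  # placeholder batch
labels = np.zeros((50000,), dtype=np.int64)             # placeholder labels

# np.savez stores positional arrays as arr_0, arr_1, ...; the evaluator
# reads the samples from the first array, so save them first.
np.savez("samples_50000x64x64x3.npz", samples, labels)

The resulting file can then be passed to the evaluator alongside a reference batch, e.g. python cm/evaluations/evaluator.py ref_batch.npz samples_50000x64x64x3.npz.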

Use in 🧨 diffusers

Consistency models are supported in 🧨 diffusers via the ConsistencyModelPipeline class. Below we provide an example:

import torch

from diffusers import ConsistencyModelPipeline

device = "cuda"
# Load the cd_imagenet64_l2 checkpoint.
model_id_or_path = "openai/diffusers-cd_imagenet64_l2"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Onestep sampling
image = pipe(num_inference_steps=1).images[0]
image.save("consistency_model_onestep_sample.png")

# Onestep sampling, class-conditional image generation
# ImageNet-64 class label 145 corresponds to king penguins

class_id = 145
class_id = torch.tensor(class_id, dtype=torch.long)

image = pipe(num_inference_steps=1, class_labels=class_id).images[0]
image.save("consistency_model_onestep_sample_penguin.png")

# Multistep sampling, class-conditional image generation
# Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77
image = pipe(timesteps=[22, 0], class_labels=class_id).images[0]
image.save("consistency_model_multistep_sample_penguin.png")

You can further speed up the inference process by applying torch.compile() to pipe.unet (supported in PyTorch 2.0 and later). For more details, please check out the official documentation. This support was contributed to 🧨 diffusers by dg845 and ayushtues.
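A minimal sketch of that speed-up, assuming PyTorch 2.0+ and the pipe object from the example above; the mode and fullgraph arguments are optional tuning choices, not requirements.

import torch

# Compile the UNet once; the first call pays the compilation cost,
# and subsequent calls reuse the optimized graph (PyTorch >= 2.0).
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe(num_inference_steps=1).images[0]  # first call triggers compilation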

Citation

If you find this method and/or code useful, please consider citing

@article{song2023consistency,
  title={Consistency Models},
  author={Song, Yang and Dhariwal, Prafulla and Chen, Mark and Sutskever, Ilya},
  journal={arXiv preprint arXiv:2303.01469},
  year={2023},
}