
cog-stable-zero123

public · 3 stars · 0 forks · 0 issues

Commits

List of commits on branch main:

  • Verified 6019da2d2ef23f74d33ec053d27cff3ac1cf87b4: Update README.md (alaradirik committed a year ago)
  • Verified 3b416cf49e1f623cbc1781cadd3b4759e7b01d90: Update README.md (alaradirik committed a year ago)
  • Unverified c6f437c0070d1a57ebf5368fff9a827a445e4c93: add files (alaradirik committed a year ago)
  • Verified 266e39dacc546edba0e778b27d49ba2b879c89f6: Initial commit (alaradirik committed a year ago)

README


Cog wrapper for Stable Zero123

This repository provides a Cog wrapper for Stable Zero123, a model based on the Stable Diffusion 1.5 framework that generates 3D object representations from multiple views. It incorporates Score Distillation Sampling (SDS) to optimize a Neural Radiance Field (NeRF), from which textured 3D meshes are produced. The API also supports converting text descriptions into 3D objects: SDXL first generates an image from the text, and the Stable Zero123 model then performs 3D generation from that image. See the official announcement and Hugging Face model page.

Note: This implementation is adapted from threestudio's implementation for 3D generation.

API Usage

You need Cog and Docker installed to run this model locally. To use Stable Zero123, upload an image or provide a text prompt describing the object to generate. The output is either a set of multi-view images of the object or a 3D object file in .glb format.

To build the Docker image with Cog and run a prediction:

cog predict -i image=@dragon_rgba.png

To start a server and send requests to your locally or remotely deployed API:

cog run -p 5000 python -m cog.server.http
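Once the server is up, predictions can be sent over HTTP. The sketch below assumes Cog's standard prediction endpoint (`POST /predictions` on port 5000, with inputs wrapped in an `"input"` object); the prompt value is a placeholder, and only Python's standard library is used.

```python
import json
import urllib.request


def predict(payload: dict, url: str = "http://localhost:5000/predictions") -> dict:
    """POST a prediction request to a locally running Cog server."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"input": payload}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example request body for text-to-3D generation: the prompt is used
# because no input image is supplied.
payload = {"prompt": "a ceramic dragon figurine", "return_3d": True}
print(json.dumps({"input": payload}))
# result = predict(payload)  # requires the server started above
```

The same JSON body works against a remote deployment by changing the `url` argument.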

Input parameters are as follows:

  • image: Optional input image; the path of an image file.
  • prompt: Text prompt used to generate an image with SDXL; only used when no input image is provided.
  • num_views: Number of views to generate.
  • guidance_scale: Scale of the guidance loss for SDXL. Higher values make the model adhere more closely to the prompt.
  • num_inference_steps: Number of inference steps to run SDXL.
  • num_multiview_steps: Number of inference steps to run for Stable Zero123's multi-view image generation.
  • num_3d_max_steps: Maximum number of training steps for 3D generation.
  • remove_background: Whether to remove the image background. Set to false only if the uploaded image already has its background removed.
  • return_3d: Whether to return a 3D object as output. If set to false, only the multi-view images generated by the multi-view diffusion backbone are returned.