A Cog wrapper for Stable Zero123, a model built on Stable Diffusion 1.5 that generates novel views of an object from a single input image. It uses Score Distillation Sampling (SDS) to optimize a Neural Radiance Field (NeRF), which is then exported as a textured 3D mesh. The API also supports text-to-3D: a prompt is first turned into an image with SDXL, and the Stable Zero123 model is then applied for 3D generation. See the official announcement and the Hugging Face model page.
Note: The 3D generation pipeline is adapted from threestudio's implementation.
You need Cog and Docker installed to run this model locally. To use Stable Zero123, upload an image of the object or provide a text prompt describing the object to generate. The output is either a set of multi-view images of the object or a 3D object file in .glb format.
To build the Docker image with Cog and run a prediction:
cog predict -i image=@dragon_rgba.png
To start a server and send requests to your locally or remotely deployed API:
cog run -p 5000 python -m cog.server.http
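Once the server is running, you can request a prediction over HTTP. As a minimal sketch (the endpoint and payload shape follow Cog's standard HTTP API; the prompt value here is only illustrative):

curl http://localhost:5000/predictions -X POST -H "Content-Type: application/json" -d '{"input": {"prompt": "a toy dragon", "return_3d": true}}'

The input keys correspond to the parameters listed below; file inputs such as image should be passed as a URL or data URI rather than a local path.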
Input parameters are as follows:
- image: Input image (optional). Path to an image file of the object; if omitted, an image is generated from the prompt with SDXL.
- prompt: Prompt used to generate an image with SDXL; only used if no input image is provided.
- num_views: Number of views to generate.
- guidance_scale: Classifier-free guidance scale for SDXL; higher values make the generated image follow the prompt more closely.
- num_inference_steps: Number of denoising steps for SDXL image generation.
- num_multiview_steps: Number of inference steps for Stable Zero123 multi-view image generation.
- num_3d_max_steps: Maximum number of training steps for 3D generation.
- remove_background: Whether to remove the image background. Set this to false only if the uploaded image already has its background removed.
- return_3d: Whether to return a 3D object as output. If set to false, only the multi-view images generated by the multi-view diffusion backbone are returned.
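For example, a text-to-3D prediction that skips the input image might look like this (the parameter values are illustrative, not recommended defaults):

cog predict -i prompt="a ceramic teapot" -i guidance_scale=7.5 -i num_inference_steps=30 -i return_3d=true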