
Sapiens

Foundation for Human Vision Models

Rawal Khirodkar · Timur Bagautdinov · Julieta Martinez · Su Zhaoen · Austin James
Peter Selednik · Stuart Anderson · Shunsuke Saito

ECCV 2024 (Oral)

Project Page · Paper PDF · Spaces · Results

Sapiens offers a comprehensive suite of models for human-centric vision tasks (2D pose estimation, body-part segmentation, depth estimation, surface-normal estimation, and more). The model family is pretrained on 300 million in-the-wild human images and generalizes well to unconstrained conditions. The models are also designed for extracting high-resolution features, having been natively trained at a 1024 × 1024 image resolution with a 16-pixel patch size.
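For intuition, the native 1024 × 1024 input with 16-pixel patches corresponds to a 64 × 64 grid of patch tokens, i.e. 4096 tokens per image. A minimal sketch of that arithmetic, using only the numbers quoted above:

image_size = 1024                             # native training resolution (pixels per side)
patch_size = 16                               # patch size (pixels)
patches_per_side = image_size // patch_size   # 64 patches along each side
num_patch_tokens = patches_per_side ** 2      # 4096 patch tokens per image
print(patches_per_side, num_patch_tokens)     # -> 64 4096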


🚀 Getting Started

Clone the Repository

git clone https://github.com/facebookresearch/sapiens.git
export SAPIENS_ROOT=/path/to/sapiens

Recommended: Lite Installation (Inference-only)

For users setting up their own environment primarily to run existing models in inference mode, we recommend the Sapiens-Lite installation.
This setup offers optimized inference (4x faster) with minimal dependencies (only PyTorch, NumPy, and OpenCV).
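As a rough illustration of what an inference-only setup can look like, the sketch below loads a TorchScript-exported Lite checkpoint and runs it on a single image using only PyTorch, NumPy, and OpenCV. This is a minimal sketch, not the repository's demo script: the checkpoint path, input resolution, and ImageNet-style normalization constants are assumptions to adapt to the Lite documentation.

import cv2
import numpy as np
import torch

# Hypothetical checkpoint path; point this at a real Sapiens-Lite (TorchScript) export.
checkpoint = "/path/to/sapiens_lite_checkpoint.pt"
model = torch.jit.load(checkpoint).eval()

img = cv2.imread("person.jpg")                            # BGR uint8, HxWx3
img = cv2.resize(img, (1024, 1024))                       # assumed input resolution
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumed ImageNet normalization
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = torch.from_numpy((img - mean) / std).permute(2, 0, 1).unsqueeze(0)  # 1x3x1024x1024

with torch.inference_mode():
    output = model(x)                                     # task-specific output tensor(s)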

Full Installation

To replicate our complete training setup, run the provided installation script.
This will create a new conda environment named sapiens and install all necessary dependencies.

cd $SAPIENS_ROOT/_install
./conda.sh

Please download the original checkpoints from Hugging Face.
You may download only the checkpoints of interest.
Set $SAPIENS_CHECKPOINT_ROOT to the path of the sapiens_host folder and place the checkpoints following this directory structure:

sapiens_host/
├── detector/
│   └── checkpoints/
│       └── rtmpose/
├── pretrain/
│   └── checkpoints/
│       ├── sapiens_0.3b/
│       │   └── sapiens_0.3b_epoch_1600_clean.pth
│       ├── sapiens_0.6b/
│       │   └── sapiens_0.6b_epoch_1600_clean.pth
│       ├── sapiens_1b/
│       └── sapiens_2b/
├── pose/
│   └── checkpoints/
│       └── sapiens_0.3b/
├── seg/
├── depth/
└── normal/
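If you prefer to script the download, a hedged sketch using huggingface_hub is shown below; the repo_id and allow_patterns are placeholders, so substitute the actual repository names and file paths from the Hugging Face page.

import os
from huggingface_hub import snapshot_download

checkpoint_root = os.path.expanduser("~/sapiens_host")    # will become $SAPIENS_CHECKPOINT_ROOT

# Placeholder repo id and file pattern; replace with the real ones from Hugging Face.
snapshot_download(
    repo_id="facebook/sapiens",
    local_dir=checkpoint_root,
    allow_patterns=["pretrain/checkpoints/sapiens_0.3b/*"],
)

Afterwards, point the environment variable at this folder, e.g. export SAPIENS_CHECKPOINT_ROOT=~/sapiens_host.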

🌟 Human-Centric Vision Tasks

We finetune Sapiens for multiple human-centric vision tasks: 2D pose estimation, body-part segmentation, depth estimation, and surface-normal estimation. Please check out the task-specific documentation in this repository.

🎯 Easy Steps to Finetuning Sapiens

Finetuning our models is super easy! A detailed training guide is provided for each of the tasks listed above.

📈 Quantitative Evaluations

๐Ÿค Acknowledgements & Support & Contributing

We would like to acknowledge the work by OpenMMLab, from which this project benefits.
For any questions or issues, please open an issue in the repository.
See the contributing guide and the code of conduct.

License

This project is licensed under the terms described in the LICENSE file.
Portions derived from open-source projects are licensed under Apache 2.0.

📚 Citation

If you use Sapiens in your research, please consider citing us.

@misc{khirodkar2024_sapiens,
    title={Sapiens: Foundation for Human Vision Models},
    author={Khirodkar, Rawal and Bagautdinov, Timur and Martinez, Julieta and Zhaoen, Su and James, Austin and Selednik, Peter and Anderson, Stuart and Saito, Shunsuke},
    year={2024},
    eprint={2408.12569},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2408.12569}
}