ConvNeXt

Official PyTorch implementation of ConvNeXt, from the following paper:

A ConvNet for the 2020s. CVPR 2022.
Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell and Saining Xie
Facebook AI Research, UC Berkeley
[arXiv][video]


We propose ConvNeXt, a pure ConvNet model constructed entirely from standard ConvNet modules. ConvNeXt is accurate, efficient, scalable and very simple in design.
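For orientation, the core building block is small enough to sketch in a few lines. The sketch below follows the block described in the paper (7x7 depthwise conv, LayerNorm, inverted-bottleneck MLP, residual connection) but deliberately omits LayerScale and stochastic depth; see the model code in this repo for the full implementation.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Simplified sketch of a ConvNeXt block: a 7x7 depthwise conv,
    LayerNorm, and an inverted-bottleneck MLP, wrapped in a residual
    connection. LayerScale and stochastic depth are omitted for brevity."""

    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)           # applied in channels-last layout
        self.pwconv1 = nn.Linear(dim, 4 * dim)  # 1x1 conv expressed as Linear
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)  # NCHW -> NHWC for LayerNorm/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)  # NHWC -> NCHW
        return shortcut + x
```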

Catalog

  • [x] ImageNet-1K Training Code
  • [x] ImageNet-22K Pre-training Code
  • [x] ImageNet-1K Fine-tuning Code
  • [x] Downstream Transfer (Detection, Segmentation) Code
  • [x] Image Classification [Colab] and Web Demo [Hugging Face Spaces]
  • [x] Fine-tune on CIFAR with Weights & Biases logging [Colab]

Results and Pre-trained Models

ImageNet-1K trained models

| name | resolution | acc@1 | #params | FLOPs | model |
|:---|:---:|:---:|:---:|:---:|:---:|
| ConvNeXt-T | 224x224 | 82.1 | 28M | 4.5G | model |
| ConvNeXt-S | 224x224 | 83.1 | 50M | 8.7G | model |
| ConvNeXt-B | 224x224 | 83.8 | 89M | 15.4G | model |
| ConvNeXt-B | 384x384 | 85.1 | 89M | 45.0G | model |
| ConvNeXt-L | 224x224 | 84.3 | 198M | 34.4G | model |
| ConvNeXt-L | 384x384 | 85.5 | 198M | 101.0G | model |

ImageNet-22K trained models

| name | resolution | acc@1 | #params | FLOPs | 22k model | 1k model |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| ConvNeXt-T | 224x224 | 82.9 | 29M | 4.5G | model | model |
| ConvNeXt-T | 384x384 | 84.1 | 29M | 13.1G | - | model |
| ConvNeXt-S | 224x224 | 84.6 | 50M | 8.7G | model | model |
| ConvNeXt-S | 384x384 | 85.8 | 50M | 25.5G | - | model |
| ConvNeXt-B | 224x224 | 85.8 | 89M | 15.4G | model | model |
| ConvNeXt-B | 384x384 | 86.8 | 89M | 47.0G | - | model |
| ConvNeXt-L | 224x224 | 86.6 | 198M | 34.4G | model | model |
| ConvNeXt-L | 384x384 | 87.5 | 198M | 101.0G | - | model |
| ConvNeXt-XL | 224x224 | 87.0 | 350M | 60.9G | model | model |
| ConvNeXt-XL | 384x384 | 87.8 | 350M | 179.0G | - | model |

ImageNet-1K trained models (isotropic)

| name | resolution | acc@1 | #params | FLOPs | model |
|:---|:---:|:---:|:---:|:---:|:---:|
| ConvNeXt-S | 224x224 | 78.7 | 22M | 4.3G | model |
| ConvNeXt-B | 224x224 | 82.0 | 87M | 16.9G | model |
| ConvNeXt-L | 224x224 | 82.6 | 306M | 59.7G | model |

Installation

Please check INSTALL.md for installation instructions.

Evaluation

We give an example evaluation command for an ImageNet-22K pre-trained, then ImageNet-1K fine-tuned ConvNeXt-B:

Single-GPU

```
python main.py --model convnext_base --eval true \
    --resume https://dl.fbaipublicfiles.com/convnext/convnext_base_22k_1k_224.pth \
    --input_size 224 --drop_path 0.2 \
    --data_path /path/to/imagenet-1k
```

Multi-GPU

```
python -m torch.distributed.launch --nproc_per_node=8 main.py \
    --model convnext_base --eval true \
    --resume https://dl.fbaipublicfiles.com/convnext/convnext_base_22k_1k_224.pth \
    --input_size 224 --drop_path 0.2 \
    --data_path /path/to/imagenet-1k
```

This should give:

```
* Acc@1 85.820 Acc@5 97.868 loss 0.563
```
  • For evaluating other model variants, change --model, --resume, and --input_size accordingly. The URLs of the pre-trained models are given in the tables above.
  • Setting a model-specific --drop_path is not strictly required for evaluation, as the DropPath module in timm acts as the identity at evaluation time regardless of the drop rate; it is required for training, however. See TRAINING.md or our paper for the values used for different models.
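The note about DropPath at evaluation time can be illustrated with a minimal sketch. The `drop_path` function below is a hypothetical scalar stand-in for timm's tensor-level DropPath, written only to show why the drop rate has no effect in eval mode:

```python
import random

def drop_path(x, drop_prob=0.0, training=False):
    """Scalar sketch of stochastic depth (the real timm DropPath drops
    whole residual branches of a batched tensor)."""
    if drop_prob == 0.0 or not training:
        # Identity at evaluation time: the value of --drop_path has no
        # effect on eval results.
        return x
    keep_prob = 1.0 - drop_prob
    if random.random() < keep_prob:
        return x / keep_prob  # rescale so the expected output is unchanged
    return 0.0
```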

Training

See TRAINING.md for training and fine-tuning instructions.

Acknowledgement

This repository is built on the timm library and the DeiT and BEiT repositories.

License

This project is released under the MIT license. Please see the LICENSE file for more information.

Citation

If you find this repository helpful, please consider citing:

@InProceedings{liu2022convnet,
  author    = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title     = {A ConvNet for the 2020s},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022},
}