SparK: The first successful BERT-style pre-training on any convolutional network (arXiv, ICLR'23 Spotlight)

Official implementation of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling".
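
For intuition, here is a minimal, self-contained sketch of the BERT-style masked-modeling idea that SparK builds on. This is illustrative only, not the repository's implementation; the patch size and mask ratio below are arbitrary example values.

```python
# Conceptual sketch of BERT/MAE-style masked image modeling (illustrative only;
# NOT the SparK implementation -- patch size and mask ratio here are arbitrary).
import torch

def random_patch_mask(batch: int, height: int, width: int,
                      patch: int = 32, mask_ratio: float = 0.6) -> torch.Tensor:
    """Return a (batch, 1, height, width) binary mask; 0 marks masked patches."""
    gh, gw = height // patch, width // patch
    keep = (torch.rand(batch, gh, gw) > mask_ratio).float()   # 1 = keep, 0 = mask
    keep = keep.repeat_interleave(patch, dim=1).repeat_interleave(patch, dim=2)
    return keep.unsqueeze(1)

images = torch.randn(2, 3, 224, 224)     # dummy batch of images
mask = random_patch_mask(2, 224, 224)    # per-patch binary mask
masked_input = images * mask             # zero out the masked patches
# A masked-modeling objective then trains an encoder-decoder to reconstruct the
# original pixels of the masked patches from `masked_input` (e.g. with an L2 loss).
```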

🔥 News

  • The talk on TechBeat (将门创投) is scheduled for Mar. 16th (UTC+0 12pm, UTC+8 8pm) as well! [Recorded Video]
  • We are honored to be invited by Synced ("机器之心机动组" WeChat video channel) to give a talk about SparK on Feb. 27th (UTC+0 11am, UTC+8 7pm); everyone is welcome! [Recorded Video]
  • This work was accepted to ICLR 2023 as a Spotlight (notable top 25%).

Video demo

https://user-images.githubusercontent.com/6366788/213662770-5f814de0-cbe8-48d9-8235-e8907fd81e0e.mp4

What's new here?

🔥 On ResNets, generative pre-training surpasses contrastive learning for the first time.

🔥 ConvNeXt gains more from pre-training than Swin Transformer, by up to +3.5 points.

🔥 Larger models benefit more from SparK pre-training, showing a scaling behavior.

🔥 Pre-trained models can make reasonable predictions.

See our paper for more analysis, discussions, and evaluations.

Catalog

  • Pre-training code
  • Fine-tuning code
  • Colab visualization playground
  • Weights & visualization playground on Huggingface
  • Weights in timm

ImageNet-1k results and pre-trained network weights

Note: for network definitions, we directly use timm.models.ResNet and the official ConvNeXt implementation.

| arch. | acc@1 | #params | FLOPs | model |
|:---|:---:|:---:|:---:|:---:|
| ResNet50 | 80.6 | 26M | 4.1G | drive |
| ResNet101 | 82.2 | 45M | 7.9G | drive |
| ResNet152 | 82.7 | 60M | 11.6G | drive |
| ResNet200 | 83.1 | 65M | 15.1G | drive |
| ConvNeXt-S | 84.1 | 50M | 8.7G | drive |
| ConvNeXt-B | 84.8 | 89M | 15.4G | drive |
| ConvNeXt-L | 85.4 | 198M | 34.4G | drive |
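
As a minimal sketch of how such a checkpoint could be loaded into a timm-defined backbone (the file name below is hypothetical, and the exact checkpoint structure may differ from what is assumed here):

```python
# Minimal sketch: loading a released ResNet-50 checkpoint into a timm model.
# Assumptions (not specified in this README): the file is a torch checkpoint that
# is either a plain state_dict or wraps one under a key such as 'module'.
import torch
import timm

model = timm.create_model('resnet50', pretrained=False, num_classes=1000)

ckpt = torch.load('resnet50_1kpretrained.pth', map_location='cpu')  # hypothetical file name
state_dict = ckpt.get('module', ckpt) if isinstance(ckpt, dict) else ckpt
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing keys:', missing)
print('unexpected keys:', unexpected)
```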

Installation

For pre-training and fine-tuning on ImageNet-1k, we highly recommend torch==1.10.0, torchvision==0.11.1, and timm==0.5.4.
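
A quick way to confirm your environment matches these versions (just a small sanity-check script, nothing project-specific):

```python
# Print installed versions; they should match the recommended ones above.
import torch, torchvision, timm

print('torch      :', torch.__version__)        # recommended: 1.10.0
print('torchvision:', torchvision.__version__)  # recommended: 0.11.1
print('timm       :', timm.__version__)         # recommended: 0.5.4
```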

Check INSTALL.md to install all dependencies for pre-training and ImageNet fine-tuning.

Pre-training

See PRETRAIN.md to pre-train models on ImageNet-1k.

Fine-tuning

  • Models on ImageNet: after installation, check downstream_imagenet for subsequent instructions.
  • ResNets on COCO: install detectron2 and see downstream_d2 for more details.
  • ConvNeXts on COCO: install mmcv and mmdetection, then see downstream_mmdet for more details.

Acknowledgement

We referred to these useful codebases:

We also appreciate these elegant frameworks:

License

This project is under the MIT license. See LICENSE for more details.

Citation

If you find this project useful, please consider adding a star, or citing us 📖:

@Article{tian2023designing,
  author  = {Keyu Tian and Yi Jiang and Qishuai Diao and Chen Lin and Liwei Wang and Zehuan Yuan},
  title   = {Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling},
  journal = {arXiv:2301.03580},
  year    = {2023},
}