SparK✨: the first successful BERT-style pre-training on any convolutional network
This is the official implementation of the paper "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling" (submitted to ICLR'23 on OpenReview in Oct. 2022).
What's new here?
🔥 On ResNets, generative pre-training surpasses contrastive learning for the first time:
🔥 ConvNeXt gains more from pre-training than Swin-Transformer, up to +3.5 points:
🔥 Larger models benefit more from SparK pre-training, showing a scaling behavior:
🔥 The pre-trained model can make reasonable predictions:
See our paper for more analysis, discussions, and evaluations.
Catalog
- Pre-training code
- Fine-tuning code
- Colab playground
- Inference and visualization demo
Install
Check INSTALL.md to install all dependencies. Our implementation is based on torch==1.10.0+cu113, torchvision==0.11.1+cu113, and timm==0.5.4. The sparse convolution framework is an optional library.
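As a quick sanity check after installation, the snippet below (an illustrative sketch, not part of the repository) compares your local environment against the versions listed above:

```python
import torch
import torchvision
import timm

# Versions this implementation was developed against (see INSTALL.md).
expected = {
    "torch": "1.10.0+cu113",
    "torchvision": "0.11.1+cu113",
    "timm": "0.5.4",
}
installed = {
    "torch": torch.__version__,
    "torchvision": torchvision.__version__,
    "timm": timm.__version__,
}

for name in expected:
    print(f"{name}: found {installed[name]}, expected {expected[name]}")

# The +cu113 builds assume a CUDA-capable setup.
print("CUDA available:", torch.cuda.is_available())
```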
Pre-training
See PRETRAIN.md to pre-train models on ImageNet.
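For intuition only, here is a minimal dense sketch of the BERT-style masked image modeling objective: randomly mask image patches, encode the visible content, reconstruct the image, and compute the loss only on masked regions. The encoder/decoder stand-ins, the 32-pixel patch size, and the 60% mask ratio are illustrative assumptions; the actual pipeline described in PRETRAIN.md uses sparse convolutions and a hierarchical decoder, which this sketch omits:

```python
import torch
import torch.nn.functional as F

# Stand-ins for the backbone and decoder (illustrative, not the repo's modules).
encoder = torch.nn.Conv2d(3, 64, 3, padding=1)
decoder = torch.nn.Conv2d(64, 3, 3, padding=1)

imgs = torch.randn(8, 3, 224, 224)   # dummy batch
patch = 32                           # mask granularity (illustrative)

# BERT-style random masking: ~60% of non-overlapping patches are hidden.
mask = (torch.rand(8, 1, 224 // patch, 224 // patch) > 0.6).float()   # 1 = visible
mask = F.interpolate(mask, scale_factor=patch, mode="nearest")        # expand to pixels

recon = decoder(encoder(imgs * mask))                                  # encode visible, reconstruct
loss = (F.mse_loss(recon, imgs, reduction="none") * (1 - mask)).mean() # loss on masked area only
loss.backward()
```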
Fine-tuning
- Models on ImageNet: after installation, check downstream_imagenet for subsequent instructions (a general checkpoint-loading sketch follows this list).
- ResNets on COCO: install detectron2 and see downstream_d2 for more details.
- ConvNeXts on COCO: install mmcv and mmdetection, then see downstream_mmdet for more details.
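Independently of the entry points above, reusing a pre-trained backbone generally follows the pattern sketched below. The checkpoint path and the assumption that the saved state is a plain backbone state_dict are hypothetical; follow downstream_imagenet / downstream_d2 / downstream_mmdet for the actual fine-tuning scripts:

```python
import timm
import torch

# Hypothetical checkpoint file and key layout; the real files and keys are
# defined by the downstream_* instructions, not by this sketch.
model = timm.create_model("resnet50", pretrained=False, num_classes=1000)
state = torch.load("spark_pretrained_resnet50.pth", map_location="cpu")
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```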
Acknowledgement
We heavily referred to these useful codebases:
We also appreciate these elegant frameworks:
License
This project is under the CC-BY 4.0 license. See LICENSE for more details.
Citation
If you find this project useful, please consider giving it a star ⭐ or citing us 📖:
@Article{tian2023designing,
author = {Keyu Tian and Yi Jiang and Qishuai Diao and Chen Lin and Liwei Wang and Zehuan Yuan},
title = {Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling},
journal = {arXiv:2301.03580},
year = {2023},
}