diff --git a/README.md b/README.md
index d9a2da8..db361de 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,7 @@
-# SparK: BERT/MAE-style Pretraining on Any Convolutional Networks [![Reddit](https://img.shields.io/badge/Reddit-🔥%20120k%20views-b31b1b.svg?style=social&logo=reddit)](https://www.reddit.com/r/MachineLearning/comments/10ix0l1/r_iclr2023_spotlight_the_first_bertstyle/) [![Twitter](https://img.shields.io/badge/Twitter-🔥%2020k%2B120k%20views-b31b1b.svg?style=social&logo=twitter)](https://twitter.com/keyutian/status/1616606179144380422)
+# SparK: the first successful BERT/MAE-style pretraining on any convolutional networks [![Reddit](https://img.shields.io/badge/Reddit-🔥%20120k%20views-b31b1b.svg?style=social&logo=reddit)](https://www.reddit.com/r/MachineLearning/comments/10ix0l1/r_iclr2023_spotlight_the_first_bertstyle/) [![Twitter](https://img.shields.io/badge/Twitter-🔥%2020k%2B120k%20views-b31b1b.svg?style=social&logo=twitter)](https://twitter.com/keyutian/status/1616606179144380422)
-Implementation of the paper [Designing BERT for Convolutional Networks: ***Spar***se and Hierarchical Mas***k***ed Modeling](https://arxiv.org/abs/2301.03580).
+This is the official implementation of the ICLR paper [Designing BERT for Convolutional Networks: ***Spar***se and Hierarchical Mas***k***ed Modeling](https://arxiv.org/abs/2301.03580).
+We have tried our best to keep the codebase clean, short, easy to read, state-of-the-art, and reliant on only minimal dependencies.

@@ -132,7 +133,7 @@ This project is under the MIT license. See [LICENSE](LICENSE) for more details.
 ## Citation
-If you found this project useful, you may consider staring ⭐, or citing us 📖:
+If you find this project useful, please consider giving us a star ⭐ or citing us in your work 📖:
 ```
 @Article{tian2023designing,
   author = {Keyu Tian and Yi Jiang and Qishuai Diao and Chen Lin and Liwei Wang and Zehuan Yuan},