- Welcome to [our ICLR poster](https://iclr.cc/virtual/2023/poster/12227)!
- On **May 11th**, another livestream was held at OpenMMLab & ReadPaper (Bilibili)! [[`📹Recorded Video`](https://www.bilibili.com/video/BV11s4y1M7qL/)]
- A brief introduction (in English) is available on [SlidesLive](https://recorder-v3.slideslive.com/?share=81463&s=e4098919-55dc-431e-83dd-e1979e5c0faa) now!
- On **Apr. 27th (UTC+8 8pm)**, another livestream will be held at [OpenMMLab (Bilibili)](https://space.bilibili.com/1293512903)!
- On **Mar. 22nd (UTC+8 8pm)**, another livestream was held at 极市平台 (Bilibili)! [[`📹Recorded Video`](https://www.bilibili.com/video/BV1Da4y1T7mr/)]
- A talk on [TechBeat (将门创投)](https://www.techbeat.net/talk-info?id=758) is scheduled for **Mar. 16th (UTC+8 8pm)** too! [[`📹Recorded Video`](https://www.techbeat.net/talk-info?id=758)]
- We were honored to be invited by Synced ("机器之心机动组 视频号" on WeChat) to give a talk about SparK on **Feb. 27th (UTC+0 11am, UTC+8 7pm)**! [[`📹Recorded Video`](https://www.bilibili.com/video/BV1J54y1u7U3/)]
- This work was accepted to ICLR 2023 as a Spotlight (notable-top-25%).
See files under `--exp_dir` to track your experiment:
It also reports training loss/acc, best evaluation acc, and remaining time at each epoch.
- `tensorboard_log/`: saves many TensorBoard logs; you can visualize accuracies, loss values, learning rates, gradient norms, and more via `tensorboard --logdir /path/to/this/tensorboard_log/ --port 23333`.
For pretraining, run [/pretrain/main.py](/pretrain/main.py) with `torchrun`.
**It is required to specify** the ImageNet data folder (`--data_path`), your experiment name & log dir (`--exp_name` and `--exp_dir`, automatically created if they do not exist), and the model name (`--model`; for valid choices, see the keys of `pretrain_default_model_kwargs` in [/pretrain/models/__init__.py line 34](/pretrain/models/__init__.py#L34)).
We use the **same** pretraining configurations (lr, batch size, etc.) for all models (ResNets and ConvNeXts) in 224-resolution pretraining.
Their **names** and **default values** are defined in [/pretrain/utils/arg_util.py lines 23-44](/pretrain/utils/arg_util.py#L23-L44).
An example launch command is sketched below.
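
As a minimal sketch, assuming a single node with 8 GPUs: the paths, port, and experiment name below are placeholders, and `resnet50` stands in for any key of `pretrain_default_model_kwargs`.

```shell
# Hypothetical single-node launch. --data_path, --exp_name, --exp_dir, and
# --model are the required arguments described above; the remaining flags are
# standard torchrun options with illustrative values.
cd /path/to/SparK/pretrain
torchrun --nproc_per_node=8 --nnodes=1 --node_rank=0 \
    --master_addr=localhost --master_port=23456 \
    main.py \
    --data_path=/path/to/imagenet \
    --exp_name=spark_resnet50_224 \
    --exp_dir=/path/to/exp_dir \
    --model=resnet50
```

For multi-node training, the same command would be run on every node with `--nnodes`, `--node_rank`, and `--master_addr` adjusted accordingly (standard `torchrun` behavior).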
See files under `--exp_dir` to track your experiment:
It also reports the loss and remaining pretraining time at each epoch.
- `tensorboard_log/`: saves many TensorBoard logs; you can visualize loss values, learning rates, gradient norms, and more via `tensorboard --logdir /path/to/this/tensorboard_log/ --port 23333`.
- `stdout_backup.txt` and `stderr_backup.txt`: back up all output printed to stdout/stderr.