From 395548e221c616f4ec66195b12a14284abb868ee Mon Sep 17 00:00:00 2001
From: keyu tian
Date: Fri, 17 Mar 2023 01:43:59 +0800
Subject: [PATCH] [upd] READMEs

---
 README.md                         | 8 ++++----
 downstream_imagenet/README.md     | 2 +-
 PRETRAIN.md => pretrain/README.md | 6 +++---
 3 files changed, 8 insertions(+), 8 deletions(-)
 rename PRETRAIN.md => pretrain/README.md (90%)

diff --git a/README.md b/README.md
index d14cf79..36c8946 100644
--- a/README.md
+++ b/README.md
@@ -103,14 +103,14 @@ Check [INSTALL.md](INSTALL.md) to install all dependencies for pre-training and
 
 ## Pre-training
 
-See [PRETRAIN.md](PRETRAIN.md) to pre-train models on ImageNet-1k.
+See [pretrain/](pretrain) to pre-train models on ImageNet-1k.
 
 ## Fine-tuning
 
-- Models on ImageNet: after installation, check [downstream_imagenet](downstream_imagenet) for subsequent instructions.
-- ResNets on COCO: install `detectron2` and see [downstream_d2](downstream_d2) for more details.
-- ConvNeXts on COCO: install `mmcv` and `mmdetection` then see [downstream_mmdet](downstream_mmdet) for more details.
+- All models on ImageNet: check [downstream_imagenet/](downstream_imagenet) for subsequent instructions.
+- ResNets on COCO: see [downstream_d2/](downstream_d2) for details.
+- ConvNeXts on COCO: see [downstream_mmdet/](downstream_mmdet) for details.
 
 ## Acknowledgement
 

diff --git a/downstream_imagenet/README.md b/downstream_imagenet/README.md
index 90979e4..d090507 100644
--- a/downstream_imagenet/README.md
+++ b/downstream_imagenet/README.md
@@ -5,7 +5,7 @@ This `downstream_imagenet` is isolated from pre-training codes. One can treat th
 
 ## Preparation for ImageNet-1k fine-tuning
 
-See [INSTALL.md](https://github.com/keyu-tian/SparK/blob/main/INSTALL.md) to prepare dependencies and ImageNet dataset.
+See [INSTALL.md](https://github.com/keyu-tian/SparK/blob/main/INSTALL.md) to prepare `pip` dependencies and the ImageNet dataset.
 
 **Note: for network definitions, we directly use `timm.models.ResNet` and [official ConvNeXt](https://github.com/facebookresearch/ConvNeXt/blob/048efcea897d999aed302f2639b6270aedf8d4c8/models/convnext.py).**
 

diff --git a/PRETRAIN.md b/pretrain/README.md
similarity index 90%
rename from PRETRAIN.md
rename to pretrain/README.md
index fb361c1..fb261b9 100644
--- a/PRETRAIN.md
+++ b/pretrain/README.md
@@ -1,6 +1,6 @@
 ## Preparation for ImageNet-1k fine-tuning
 
-See [INSTALL.md](https://github.com/keyu-tian/SparK/blob/main/INSTALL.md) to prepare dependencies and ImageNet dataset.
+See [INSTALL.md](https://github.com/keyu-tian/SparK/blob/main/INSTALL.md) to prepare `pip` dependencies and the ImageNet dataset.
 
 **Note: for network definitions, we directly use `timm.models.ResNet` and [official ConvNeXt](https://github.com/facebookresearch/ConvNeXt/blob/048efcea897d999aed302f2639b6270aedf8d4c8/models/convnext.py).**
 
@@ -10,7 +10,7 @@ See [INSTALL.md](https://github.com/keyu-tian/SparK/blob/main/INSTALL.md) to pre
 
 Run [main.sh](https://github.com/keyu-tian/SparK/blob/main/main.sh). It is **required** to specify ImageNet data folder and model name to run pre-training.
 
-Besides, you can pass arbitrary key-word arguments (like `--ep=400 --bs=2048`) to `main.sh` to specify some pre-training hyperparameters (see [utils/arg_utils.py](https://github.com/keyu-tian/SparK/blob/main/utils/arg_utils.py) for all hyperparameters and their default values).
+Besides, you can pass arbitrary key-word arguments (like `--ep=400 --bs=2048`) to `main.sh` to specify some pre-training hyperparameters (see [utils/arg_utils.py](https://github.com/keyu-tian/SparK/blob/main/pretrain/utils/arg_utils.py) for all hyperparameters and their default values).
 
 Here is an example command pre-training a ResNet50 on single machine with 8 GPUs:
 
@@ -54,7 +54,7 @@ Add `--resume_from=path/to/still_pretraining.pth` to resume from a saved
 
 ## Regarding sparse convolution
 
-For generality, we use the masked convolution implemented in [encoder.py](https://github.com/keyu-tian/SparK/blob/main/encoder.py) to simulate submanifold sparse convolution by default.
+For generality, we use the masked convolution implemented in [encoder.py](https://github.com/keyu-tian/SparK/blob/main/pretrain/encoder.py) to simulate submanifold sparse convolution by default.
 
 **For anyone who might want to run SparK on another architectures**:
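The patched README text above says `main.sh` takes the data folder and model name as required inputs, plus optional keyword hyperparameters such as `--ep=400 --bs=2048`. A minimal sketch of assembling such a launch command — the experiment name, the `--data_path`/`--model` flag names, and all paths here are illustrative placeholders, not verified against the repository; only `--ep`/`--bs` come from the patch text:

```shell
#!/bin/bash
# Hypothetical launch sketch for SparK pre-training via main.sh.
# EXP_NAME, DATA_PATH, and the flag names other than --ep/--bs are assumptions.
EXP_NAME="resnet50_pretrain"
DATA_PATH="/path/to/imagenet"   # ImageNet-1k root folder (placeholder)

CMD="bash main.sh ${EXP_NAME} --data_path=${DATA_PATH} --model=resnet50 --ep=400 --bs=2048"
echo "${CMD}"   # print rather than execute, since the paths are placeholders
```

Consult `utils/arg_utils.py` (referenced in the patch) for the real argument names and their default values before running anything like this.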