2 Commits (e9cae2d0787cd5c2fc6165a6061f92fa09e48fb1)
HinGwenWoong · 28022ba73d · 3 years ago

[Feature] Support automatically scaling LR according to GPU number and samples per GPU (#7482)

* Add default_gpu_number flag in config: default_runtime.py
* Support automatically scaling LR according to GPU number
* Improve code comments
* Improve code comments
* Improve variable naming
* Improve log message for scale LR, add print of samples_per_gpu
* Modify formula for scale LR
* Use a function to encapsulate scale LR
* Add special flags for GPU and samples_per_gpu in config, to be used when scaling LR
* Add distributed flag in function scale_lr
* Docs: add "Learning rate automatically scale" and add disable_auto_scale_lr flag
* Update doc
* Use len(cfg.gpu_ids) to get GPU number in distributed mode
* Update doc about "Learning rate automatically scale"
* Use default batch size instead of GPU x samples per GPU; add flag in each config whose batch size does not equal the default
* Use default_batch_size or mmdet_official_special_batch_size to scale the LR
* Add argument --auto-scale-lr to enable auto scale LR
* Update doc about learning rate automatically scale
* Use default_batch_size in each config file instead of mmdet_official_special_batch_size
* Improve the if branch
* Do not set default batch size in dataset config
* Set default batch size to 16; fix some code lint
* Undo some changes
* Fix lint problem
* Doc: add the function of default_batch_size and enable_auto_scale_lr
* Remove `f` prefix where there is no variable
* Update doc about the default_batch_size setting
* Update explanation about default_batch_size
* Update explanation about default_batch_size
* Add default_batch_size in config files whose default batch size is not 16; update doc
* Update doc
* Fix lint: double-quoted strings
* Improve the naming of some variables
* Update doc about the learning rate automatically scale
* According to configs/cityscapes, set default_batch_size = 8 in config files
* Fix some comments according to the review
* Fix some doc according to the review
* Fix some doc according to the review
* Do not use assert in the function autoscale_lr
* Improve variable naming: enable_auto_scale_lr to auto_scale_lr
* Use world_size to get GPU number in distributed mode
* Rename function autoscale_lr to auto_scale_lr for consistency
* Fix lint problem
* Fix lint problem
* Add warning message when default_batch_size cannot be found
* Add warning message when default_batch_size cannot be found
* Improve the doc according to the review
* Add `default_initial_lr` for auto scale LR; use an auto_scale_lr_config dict to contain all settings of auto scale LR
* Improve coding style
* Improve doc
* Improve doc
* Remove a previous `default_batch_size`
* Fix doc bug
* Add warning message when cfg.optimizer.lr != default_initial_lr
* Always use `cfg.optimizer.lr` to calculate the new LR
* Fix lint problem
* Change warning message version to 2.24
* Fix logger info line
* Set None when the value cannot be found
* Fix num_gpu when using distributed
* Improve doc
* Add more detail for `optimizer.lr` and `auto_scale_lr_config.auto_scale_lr`
* Fix lint problem
* Use new config dict `auto_scale_lr`
* Implement auto_scale_lr logic using `auto_scale_lr`
* Improve doc about new auto_scale_lr
* Fix logger string
* Improve logger info
* Fix lint problem
* Delete some blank lines
* Improve coding
* Add `auto_scale_lr` to those config files whose `sample_per_gpu` is not `2` and whose number of GPUs is not specified in the README
* Add `auto_scale_lr` to those config files whose `sample_per_gpu` is not `2` and whose number of GPUs is not specified in the README
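To make the scaling rule above concrete, here is a minimal sketch of the linear LR scaling this commit describes: the LR in a config is assumed to be tuned for a base batch size (16 by default, 8 for the cityscapes configs per the commit message) and is rescaled by the ratio of the actual effective batch size (GPUs × samples per GPU) to that base. The function name, signature, and warning text below are illustrative, not the code merged in #7482.

```python
# Illustrative sketch of the linear LR scaling rule this commit implements
# (names and messages are assumptions, not the merged mmdetection code).
import warnings


def auto_scale_lr(base_lr: float, num_gpus: int, samples_per_gpu: int,
                  base_batch_size: int = 16) -> float:
    """Scale the learning rate linearly with the effective batch size.

    The effective batch size is num_gpus * samples_per_gpu; base_batch_size
    is the batch size the config's base_lr was tuned for.
    """
    effective_batch_size = num_gpus * samples_per_gpu
    scaled_lr = base_lr * effective_batch_size / base_batch_size
    if scaled_lr != base_lr:
        warnings.warn(
            f'LR auto-scaled from {base_lr} to {scaled_lr} '
            f'(batch size {effective_batch_size} vs. base {base_batch_size}).')
    return scaled_lr


# Example: a config tuned for lr=0.02 at batch size 16, run on 4 GPUs with
# 8 samples each (effective batch size 32) -> lr becomes 0.04.
print(auto_scale_lr(0.02, num_gpus=4, samples_per_gpu=8))
```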
Zhe Chen · 5ef56c174b · 3 years ago

[Feature] Add pvt and pvtv2 (#5780)

* Add pvt
* Add pvtv2
* Remove redundant code & use init_cfg
* Rename FFN to ConvFFN
* Rename
* Move to transformer.py
* Remove redundant code
* Add PatchEmbed
* Remove patch size
* Add doc
* Add tests
* Resolve comments
* Add pad_to_stride
* Resolve comments
* Fix bugs
* Add adaptive pooling
* Use adaptive pooling
* Fix docstring
* Add unit test
* Add more doc
* Add example
* Remove patch_to_stride
* Rename poo
* Resolve comments
* Fix doc
* Refactor patch embed and patch merge, and fix pretrain
* Move padding calculation to a function
* Change the default value of bias in PatchEmbed
* Fix some bugs
* Rename encoder layer
* Add unittest
* Fix lint
* Update pvt-l config
* Update pvt-l config
* Add pvt metafile
* Update pvt metafile
* Update pvt readme doc
* Resolve comments

Co-authored-by: whai362 <wangwenhai362@163.com>
Co-authored-by: zhangshilong <2392587229zsl@gmail.com>
Co-authored-by: BIGWangYuDong <yudongwang@tju.edu.cn>
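For context, MMDetection wires new backbones like PVT into detectors through its registry-driven config system. The sketch below shows how a PVT backbone of the kind this PR adds might be dropped into a RetinaNet config; the `type` string follows the class naming this PR introduces, but every field value here (base config path, stage depths, channel widths, checkpoint) is an illustrative assumption rather than one of the merged configs.

```python
# Hypothetical MMDetection config sketch: swap the inherited ResNet backbone
# for a PVT backbone. All field values are illustrative assumptions.
_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py'

model = dict(
    backbone=dict(
        _delete_=True,                    # drop the inherited ResNet settings
        type='PyramidVisionTransformer',  # backbone class added by this PR
        num_layers=[2, 2, 2, 2],          # per-stage depths (PVT-Tiny, assumed)
        init_cfg=dict(type='Pretrained',
                      checkpoint='...')),  # placeholder for PVT weights
    # PVT's four stages emit 64/128/320/512 channels instead of ResNet-50's
    # 256/512/1024/2048, so the FPN neck's inputs must be overridden to match.
    neck=dict(in_channels=[64, 128, 320, 512]))
```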