OpenMMLab Detection Toolbox and Benchmark https://mmdetection.readthedocs.io/
_base_ = 'retinanet_pvtv2-b0_fpn_1x_coco.py'
model = dict(
    backbone=dict(
        embed_dims=64,
        num_layers=[3, 8, 27, 3],
        init_cfg=dict(checkpoint='https://github.com/whai362/PVT/'
                      'releases/download/v2/pvt_v2_b4.pth')),
    neck=dict(in_channels=[64, 128, 320, 512]))
# optimizer
optimizer = dict(
    _delete_=True, type='AdamW', lr=0.0001 / 1.4, weight_decay=0.0001)
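For context, `_delete_=True` tells the MMCV config loader to replace the inherited optimizer dict rather than merge into it; without it, SGD-only keys from the base config (such as `momentum`) would survive into the AdamW settings. A minimal sketch of that merge behavior, using a hypothetical `merge_cfg` helper (not MMCV's actual API):

```python
def merge_cfg(base, override):
    # Sketch of MMCV-style config merging: nested dicts merge key by key,
    # unless the override sets _delete_=True, in which case the inherited
    # base value is discarded entirely and the override is used as-is.
    override = dict(override)  # avoid mutating the caller's dict
    if override.pop('_delete_', False):
        return override
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], value)
        else:
            merged[key] = value
    return merged

base_optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
override = dict(_delete_=True, type='AdamW', lr=0.0001 / 1.4, weight_decay=0.0001)
# With _delete_=True the result is pure AdamW; `momentum` does not leak in.
merged = merge_cfg(base_optimizer, override)
```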
# dataset settings
data = dict(samples_per_gpu=1, workers_per_gpu=1)
# NOTE: `auto_scale_lr` is for automatically scaling LR,
# USER SHOULD NOT CHANGE ITS VALUES.
# base_batch_size = (8 GPUs) x (1 sample per GPU)
auto_scale_lr = dict(base_batch_size=8)
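The `auto_scale_lr` hint lets the training script apply the linear scaling rule when the effective batch size differs from `base_batch_size` (here 8 GPUs x 1 sample per GPU). A sketch of the arithmetic, using a hypothetical `scale_lr` helper rather than MMDetection's internal function:

```python
def scale_lr(base_lr, base_batch_size, num_gpus, samples_per_gpu):
    # Linear scaling rule: the learning rate grows in proportion
    # to the effective batch size actually used for training.
    real_batch_size = num_gpus * samples_per_gpu
    return base_lr * real_batch_size / base_batch_size

# This config declares base_batch_size=8 (8 GPUs x 1 sample/GPU), so
# training on 16 GPUs would double the LR configured above (0.0001 / 1.4).
doubled = scale_lr(0.0001 / 1.4, base_batch_size=8, num_gpus=16, samples_per_gpu=1)
```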