* Add default_gpu_number flag in config: default_runtime.py
* Support automatically scaling LR according to GPU number
* Improve code comments
* Improve code comments
* Improve variable naming
* Improve the log message for LR scaling and also print samples_per_gpu
* Modify the formula for scaling the LR
* Use a function to encapsulate LR scaling
* Add dedicated flags for GPU count and samples_per_gpu in the config, to be used when scaling the LR
* Add a distributed flag to the scale_lr function
* Docs: add a section on automatic learning rate scaling; add the disable_auto_scale_lr flag
* Update doc
* Use len(cfg.gpu_ids) to get the GPU number in distributed mode
* Update doc about automatic learning rate scaling
* Use a default batch size instead of GPUs x samples per GPU; add a flag to each config whose batch size does not equal the default
* Use default_batch_size or mmdet_official_special_batch_size to scale the LR
* Add argument --auto-scale-lr to enable automatic LR scaling
* Update doc about automatic learning rate scaling
* Use default_batch_size in each config file instead of mmdet_official_special_batch_size
* Improve the if branch
* Do not set the default batch size in the dataset config
* Set the default batch size to 16; fix some lint issues
* Undo some changes
* Fix lint problem
* Docs: describe the purpose of default_batch_size and enable_auto_scale_lr
* Remove the `f` prefix from strings with no variables
* Update doc about the default_batch_size setting
* Update explanation about default_batch_size
* Update explanation about default_batch_size
* Add default_batch_size to config files whose default batch size is not 16; update doc
* Update doc
* Fix lint: double-quoted strings
* Improve the naming of some variables
* Update doc about automatic learning rate scaling
* According to configs/cityscapes, set default_batch_size = 8 in config files
* Fix some comments according to the review
* Fix some doc according to the review
* Fix some doc according to the review
* Do not use assert in the autoscale_lr function
* Improve variable naming: enable_auto_scale_lr to auto_scale_lr
* Use world_size to get the GPU number when distributed
* Rename the autoscale_lr function to auto_scale_lr for consistency
* Fix lint problem
* Fix lint problem
* Add a warning message when default_batch_size cannot be found
* Add a warning message when default_batch_size cannot be found
* Improve the doc according to the review
* Add `default_initial_lr` for auto LR scaling; use an auto_scale_lr_config dict to hold all auto-scale-LR settings
* Improve coding style
* Improve Doc
* Improve doc
* Remove a previous `default_batch_size`
* Fix doc bug
* Add a warning message when cfg.optimizer.lr != default_initial_lr
* Always use `cfg.optimizer.lr` to calculate the new LR
* Fix lint problem
* Change the version in the warning message to 2.24
* Fix logger info line
* Set to None when the value cannot be found
* Fix num_gpu when using distributed mode
* Improve doc
* Add more detail for `optimizer.lr` and `auto_scale_lr_config.auto_scale_lr`
* Fix lint problem
* Use the new config dict `auto_scale_lr`
* Make the auto-scale-LR logic use the `auto_scale_lr` dict (see the sketch after this list)
* Improve doc about new auto_scale_lr
* Fix logger string
* Improve logger info
* Fix lint problem
* Delete some blank lines
* Improve code
* Add `auto_scale_lr` to config files whose `samples_per_gpu` is not `2` and whose GPU count is not specified in the README
* Add `auto_scale_lr` to config files whose `samples_per_gpu` is not `2` and whose GPU count is not specified in the README
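
For reference, a minimal sketch of the behaviour these commits describe: the learning rate in `cfg.optimizer.lr` is scaled linearly by the ratio of the actual batch size (number of GPUs x `samples_per_gpu`) to a base batch size declared in the `auto_scale_lr` config dict. The helper name, signature, and dict keys (`enable`, `base_batch_size`) below are illustrative assumptions, not necessarily the exact final API.

```python
import warnings


def auto_scale_lr(cfg, distributed, logger):
    """Sketch: scale cfg.optimizer.lr linearly with the total batch size."""
    scale_cfg = cfg.get('auto_scale_lr', None)
    if scale_cfg is None or not scale_cfg.get('enable', False):
        logger.info('Automatic LR scaling is disabled.')
        return

    base_batch_size = scale_cfg.get('base_batch_size', None)  # e.g. 16
    if base_batch_size is None:
        warnings.warn('Cannot find "base_batch_size" in the config; '
                      'the learning rate will not be scaled.')
        return

    # Total batch size = number of GPUs * samples per GPU.
    if distributed:
        import torch.distributed as dist
        num_gpus = dist.get_world_size()
    else:
        num_gpus = len(cfg.gpu_ids)
    samples_per_gpu = cfg.data.samples_per_gpu
    batch_size = num_gpus * samples_per_gpu

    # Linear scaling rule: new_lr = old_lr * batch_size / base_batch_size.
    scaled_lr = cfg.optimizer.lr * batch_size / base_batch_size
    logger.info(f'LR scaled from {cfg.optimizer.lr} to {scaled_lr} '
                f'({num_gpus} GPUs x {samples_per_gpu} samples/GPU = '
                f'batch size {batch_size}; base batch size {base_batch_size}).')
    cfg.optimizer.lr = scaled_lr
```

With a config such as `auto_scale_lr = dict(enable=True, base_batch_size=16)` (key names assumed), training on 8 GPUs with `samples_per_gpu=2` would leave the LR unchanged (16/16), while training on 4 GPUs would halve it.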