[Docs] change tag into comment (#5043)

pull/5058/head
Haian Huang (深度眸) authored 4 years ago, committed via GitHub
parent 2313bd7694
commit 563fb46944
 1. configs/albu_example/README.md (2 changes)
 2. configs/atss/README.md (2 changes)
 3. configs/autoassign/README.md (2 changes)
 4. configs/carafe/README.md (2 changes)
 5. configs/cascade_rcnn/README.md (2 changes)
 6. configs/cascade_rpn/README.md (2 changes)
 7. configs/centripetalnet/README.md (2 changes)
 8. configs/cityscapes/README.md (2 changes)
 9. configs/cornernet/README.md (2 changes)
10. configs/dcn/README.md (4 changes)
11. configs/deepfashion/README.md (2 changes)
12. configs/detectors/README.md (2 changes)
13. configs/detr/README.md (2 changes)
14. configs/double_heads/README.md (2 changes)
15. configs/dynamic_rcnn/README.md (2 changes)
16. configs/empirical_attention/README.md (2 changes)
17. configs/fast_rcnn/README.md (2 changes)
18. configs/faster_rcnn/README.md (2 changes)
19. configs/fcos/README.md (2 changes)
20. configs/foveabox/README.md (2 changes)
21. configs/fp16/README.md (2 changes)
22. configs/free_anchor/README.md (2 changes)
23. configs/fsaf/README.md (2 changes)
24. configs/gcnet/README.md (2 changes)
25. configs/gfl/README.md (2 changes)
26. configs/ghm/README.md (2 changes)
27. configs/gn+ws/README.md (2 changes)
28. configs/gn/README.md (2 changes)
29. configs/grid_rcnn/README.md (2 changes)
30. configs/groie/README.md (2 changes)
31. configs/guided_anchoring/README.md (2 changes)
32. configs/hrnet/README.md (2 changes)
33. configs/htc/README.md (2 changes)
34. configs/instaboost/README.md (2 changes)
35. configs/ld/README.md (2 changes)
36. configs/legacy_1.x/README.md (2 changes)
37. configs/libra_rcnn/README.md (2 changes)
38. configs/lvis/README.md (2 changes)
39. configs/mask_rcnn/README.md (2 changes)
40. configs/ms_rcnn/README.md (2 changes)
41. configs/nas_fcos/README.md (2 changes)
42. configs/nas_fpn/README.md (2 changes)
43. configs/paa/README.md (2 changes)
44. configs/pafpn/README.md (2 changes)
45. configs/pascal_voc/README.md (2 changes)
46. configs/pisa/README.md (2 changes)
47. configs/point_rend/README.md (2 changes)
48. configs/reppoints/README.md (2 changes)
49. configs/res2net/README.md (2 changes)
50. configs/retinanet/README.md (2 changes)
51. configs/rpn/README.md (2 changes)
52. configs/sabl/README.md (2 changes)
53. configs/scnet/README.md (2 changes)
54. configs/scratch/README.md (2 changes)
55. configs/sparse_rcnn/README.md (2 changes)
56. configs/ssd/README.md (2 changes)
57. configs/tridentnet/README.md (2 changes)
58. configs/vfnet/README.md (2 changes)
59. configs/wider_face/README.md (2 changes)
60. configs/yolact/README.md (2 changes)
61. configs/yolo/README.md (2 changes)
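
Every hunk below makes the same one-line change: the bare `[ALGORITHM]`, `[DATASET]`, or `[OTHERS]` tag near the top of each config README is wrapped in an HTML comment (`[ALGORITHM]` becomes `<!-- [ALGORITHM] -->`), so the marker no longer appears in the rendered page while staying greppable in the source. As a minimal sketch of how such markers can still be collected after this change (assuming they feed a docs-stats script; the regex, paths, and function name here are illustrative, not MMDetection's actual tooling):

```python
import re
from pathlib import Path

# Matches both the old bare form ("[ALGORITHM]") and the new commented
# form ("<!-- [ALGORITHM] -->") on a line of its own.
TAG_RE = re.compile(r'^(?:<!--\s*)?\[(ALGORITHM|DATASET|OTHERS)\](?:\s*-->)?$')

def collect_tags(configs_dir="configs"):
    """Map each config README to the category tags it declares."""
    tags = {}
    for readme in sorted(Path(configs_dir).glob("*/README.md")):
        found = []
        for line in readme.read_text(encoding="utf-8").splitlines():
            match = TAG_RE.match(line.strip())
            if match:
                found.append(match.group(1))
        if found:
            tags[str(readme)] = found
    return tags

if __name__ == "__main__":
    # Run from the repository root; prints each README and its tags.
    for path, found in collect_tags().items():
        print(path, found)
```

Under these assumptions, `configs/dcn/README.md` would report `['ALGORITHM', 'ALGORITHM']`, matching its two hunks below; every other README touched by this commit declares exactly one tag.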

configs/albu_example/README.md
@@ -1,6 +1,6 @@
 # Albu Example
-[OTHERS]
+<!-- [OTHERS] -->
 ```
 @article{2018arXiv180906839B,

configs/atss/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{zhang2019bridging,

configs/autoassign/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 @article{zhu2020autoassign,

configs/carafe/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide config files to reproduce the object detection & instance segmentation results in the ICCV 2019 Oral paper for [CARAFE: Content-Aware ReAssembly of FEatures](https://arxiv.org/abs/1905.02188).

configs/cascade_rcnn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{Cai_2019,

configs/cascade_rpn/README.md
@@ -1,6 +1,6 @@
 # Cascade RPN
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide the code for reproducing experiment results of [Cascade RPN](https://arxiv.org/abs/1909.06720).

configs/centripetalnet/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @InProceedings{Dong_2020_CVPR,

configs/cityscapes/README.md
@@ -1,6 +1,6 @@
 # Cityscapes Dataset
-[DATASET]
+<!-- [DATASET] -->
 ```
 @inproceedings{Cordts2016Cityscapes,

configs/cornernet/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{law2018cornernet,

configs/dcn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```none
 @inproceedings{dai2017deformable,
@@ -13,7 +13,7 @@
 }
 ```
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 @article{zhu2018deformable,

configs/deepfashion/README.md
@@ -1,6 +1,6 @@
 # DeepFashion
-[DATASET]
+<!-- [DATASET] -->
 [MMFashion](https://github.com/open-mmlab/mmfashion) develops "fashion parsing and segmentation" module
 based on the dataset

configs/detectors/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide the config files for [DetectoRS: Detecting Objects with Recursive Feature Pyramid and Switchable Atrous Convolution](https://arxiv.org/pdf/2006.02334.pdf).

configs/detr/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide the config files for DETR: [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872).

configs/double_heads/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{wu2019rethinking,

configs/dynamic_rcnn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 @article{DynamicRCNN,

configs/empirical_attention/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{zhu2019empirical,

configs/fast_rcnn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{girshick2015fast,

configs/faster_rcnn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{Ren_2017,

configs/fcos/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{tian2019fcos,

configs/foveabox/README.md
@@ -1,6 +1,6 @@
 # FoveaBox: Beyond Anchor-based Object Detector
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 FoveaBox is an accurate, flexible and completely anchor-free object detection system for object detection framework, as presented in our paper [https://arxiv.org/abs/1904.03797](https://arxiv.org/abs/1904.03797):
 Different from previous anchor-based methods, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existing possibility, and (b) producing category-agnostic bounding box for each position that potentially contains an object.

configs/fp16/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[OTHERS]
+<!-- [OTHERS] -->
 ```latex
 @article{micikevicius2017mixed,

configs/free_anchor/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{zhang2019freeanchor,

configs/fsaf/README.md
@@ -1,6 +1,6 @@
 # Feature Selective Anchor-Free Module for Single-Shot Object Detection
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 FSAF is an anchor-free method published in CVPR2019 ([https://arxiv.org/pdf/1903.00621.pdf](https://arxiv.org/pdf/1903.00621.pdf)).
 Actually it is equivalent to the anchor-based method with only one anchor at each feature map position in each FPN level.

configs/gcnet/README.md
@@ -7,7 +7,7 @@ We provide config files to reproduce the results in the paper for
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 **GCNet** is initially described in [arxiv](https://arxiv.org/abs/1904.11492). Via absorbing advantages of Non-Local Networks (NLNet) and Squeeze-Excitation Networks (SENet), GCNet provides a simple, fast and effective approach for global context modeling, which generally outperforms both NLNet and SENet on major benchmarks for various recognition tasks.

configs/gfl/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide config files to reproduce the object detection results in the paper [Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection](https://arxiv.org/abs/2006.04388)

configs/ghm/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 @inproceedings{li2019gradient,

configs/gn+ws/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 @article{weightstandardization,

configs/gn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{wu2018group,

configs/grid_rcnn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{lu2019grid,

configs/groie/README.md
@@ -11,7 +11,7 @@ on COCO object detection.
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 This paper is motivated by the need to overcome to the limitations of existing
 RoI extractors which select only one (the best) layer from FPN.

configs/guided_anchoring/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide config files to reproduce the results in the CVPR 2019 paper for [Region Proposal by Guided Anchoring](https://arxiv.org/abs/1901.03278).

configs/hrnet/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{SunXLW19,

configs/htc/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide config files to reproduce the results in the CVPR 2019 paper for [Hybrid Task Cascade](https://arxiv.org/abs/1901.07518).

configs/instaboost/README.md
@@ -1,6 +1,6 @@
 # InstaBoost for MMDetection
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 Configs in this directory is the implementation for ICCV2019 paper "InstaBoost: Boosting Instance Segmentation Via Probability Map Guided Copy-Pasting" and provided by the authors of the paper. InstaBoost is a data augmentation method for object detection and instance segmentation. The paper has been released on [`arXiv`](https://arxiv.org/abs/1908.07801).

configs/ld/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @Article{zheng2021LD,

configs/legacy_1.x/README.md
@@ -1,6 +1,6 @@
 # Legacy Configs in MMDetection V1.x
-[OTHERS]
+<!-- [OTHERS] -->
 Configs in this directory implement the legacy configs used by MMDetection V1.x and its model zoos.

configs/libra_rcnn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide config files to reproduce the results in the CVPR 2019 paper [Libra R-CNN](https://arxiv.org/pdf/1904.02701.pdf).

configs/lvis/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[DATASET]
+<!-- [DATASET] -->
 ```latex
 @inproceedings{gupta2019lvis,

configs/mask_rcnn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{He_2017,

configs/ms_rcnn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 @inproceedings{huang2019msrcnn,

configs/nas_fcos/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{wang2019fcos,

configs/nas_fpn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{ghiasi2019fpn,

configs/paa/README.md
@@ -1,6 +1,6 @@
 # Probabilistic Anchor Assignment with IoU Prediction for Object Detection
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{paa-eccv2020,

configs/pafpn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 @inproceedings{liu2018path,

configs/pascal_voc/README.md
@@ -1,6 +1,6 @@
 # PASCAL VOC Dataset
-[DATASET]
+<!-- [DATASET] -->
 ```
 @Article{Everingham10,

configs/pisa/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{cao2019prime,

configs/point_rend/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @InProceedings{kirillov2019pointrend,

configs/reppoints/README.md
@@ -7,7 +7,7 @@ We provide code support and configuration files to reproduce the results in the
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 **RepPoints**, initially described in [arXiv](https://arxiv.org/abs/1904.11490), is a new representation method for visual objects, on which visual understanding tasks are typically centered. Visual object representation, aiming at both geometric description and appearance feature extraction, is conventionally achieved by `bounding box + RoIPool (RoIAlign)`. The bounding box representation is convenient to use; however, it provides only a rectangular localization of objects that lacks geometric precision and may consequently degrade feature quality. Our new representation, RepPoints, models objects by a `point set` instead of a `bounding box`, which learns to adaptively position themselves over an object in a manner that circumscribes the object’s `spatial extent` and enables `semantically aligned feature extraction`. This richer and more flexible representation maintains the convenience of bounding boxes while facilitating various visual understanding applications. This repo demonstrated the effectiveness of RepPoints for COCO object detection.

configs/res2net/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer.

configs/retinanet/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{lin2017focal,

configs/rpn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @inproceedings{ren2015faster,

configs/sabl/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide config files to reproduce the object detection results in the ECCV 2020 Spotlight paper for [Side-Aware Boundary Localization for More Precise Object Detection](https://arxiv.org/abs/1912.04260).

configs/scnet/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 We provide the code for reproducing experiment results of [SCNet](https://arxiv.org/abs/2012.10150).

configs/scratch/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{he2018rethinking,

configs/sparse_rcnn/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 @article{peize2020sparse,

configs/ssd/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @article{Liu_2016,

configs/tridentnet/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 @InProceedings{li2019scale,

configs/vfnet/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 **VarifocalNet (VFNet)** learns to predict the IoU-aware classification score which mixes the object presence confidence and localization accuracy together as the detection score for a bounding box. The learning is supervised by the proposed Varifocal Loss (VFL), based on a new star-shaped bounding box feature representation (the features at nine yellow sampling points). Given the new representation, the object localization accuracy is further improved by refining the initially regressed bounding box. The full paper is available at: [https://arxiv.org/abs/2008.13367](https://arxiv.org/abs/2008.13367).

configs/wider_face/README.md
@@ -1,6 +1,6 @@
 # WIDER Face Dataset
-[DATASET]
+<!-- [DATASET] -->
 To use the WIDER Face dataset you need to download it
 and extract to the `data/WIDERFace` folder. Annotation in the VOC format

configs/yolact/README.md
@@ -1,6 +1,6 @@
 # **Y**ou **O**nly **L**ook **A**t **C**oefficien**T**s
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```
 ██╗ ██╗ ██████╗ ██╗ █████╗ ██████╗████████╗

configs/yolo/README.md
@@ -2,7 +2,7 @@
 ## Introduction
-[ALGORITHM]
+<!-- [ALGORITHM] -->
 ```latex
 @misc{redmon2018yolov3,
