Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection
Grounding DINO


Official PyTorch implementation of Grounding DINO. Code will be available soon!

Highlight

  • SOTA closed-set detection model DINO => SOTA open-set detection model Grounding DINO.
  • Pure Transformer-based.
  • COCO zero-shot 52.5 AP (training without COCO data!). COCO fine-tune 63.0 AP.

[Hero figure]

Model

Grounding DINO consists of a text backbone, an image backbone, a feature enhancer, a language-guided query selection module, and a cross-modality decoder.

[Architecture diagram]
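Since the official code is not yet released, the data flow through these components can only be sketched. The snippet below is an illustrative PyTorch mock-up, not the paper's implementation: all class names, layer choices, and dimensions are placeholders (the paper uses e.g. BERT and Swin Transformer backbones, replaced here by toy layers), and the query selection follows the high-level description of picking image tokens most relevant to the text.

```python
# Minimal, illustrative sketch of the Grounding DINO forward pass.
# All names and layers here are placeholders, not the official code.
import torch
import torch.nn as nn


class GroundingDINOSketch(nn.Module):
    def __init__(self, d_model=256, num_queries=900, vocab=1000):
        super().__init__()
        # Text backbone (BERT in the paper); a toy embedding stands in here.
        self.text_backbone = nn.Embedding(vocab, d_model)
        # Image backbone (Swin in the paper); a toy patchify conv stands in here.
        self.image_backbone = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        # Feature enhancer: fuses image and text tokens (self/cross attention).
        self.feature_enhancer = nn.TransformerEncoderLayer(
            d_model, nhead=8, batch_first=True)
        # Cross-modality decoder: queries attend to the fused features.
        self.decoder = nn.TransformerDecoderLayer(
            d_model, nhead=8, batch_first=True)
        self.num_queries = num_queries
        self.bbox_head = nn.Linear(d_model, 4)

    def forward(self, images, token_ids):
        txt = self.text_backbone(token_ids)                           # (B, L, D)
        img = self.image_backbone(images).flatten(2).transpose(1, 2)  # (B, HW, D)
        fused = self.feature_enhancer(torch.cat([img, txt], dim=1))
        img_feat = fused[:, : img.shape[1]]
        txt_feat = fused[:, img.shape[1]:]
        # Language-guided query selection: score each image token by its
        # best similarity to any text token, keep the top-k as queries.
        sim = (img_feat @ txt_feat.transpose(1, 2)).max(-1).values    # (B, HW)
        k = min(self.num_queries, sim.shape[1])
        idx = sim.topk(k, dim=1).indices
        queries = torch.gather(
            img_feat, 1, idx.unsqueeze(-1).expand(-1, -1, img_feat.shape[-1]))
        hs = self.decoder(queries, fused)
        return self.bbox_head(hs).sigmoid()  # normalized boxes, (B, k, 4)
```

For example, a 64x64 image with a 5-token caption and `num_queries=10` yields a `(1, 10, 4)` tensor of normalized box coordinates.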

Links

Our model is related to DINO and GLIP. Thanks for their great work!

We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work is available at Awesome Detection Transformer. A new toolbox, detrex, is available as well.

Bibtex

If you find our work helpful for your research, please consider citing the following BibTeX entry.

```bibtex
@inproceedings{ShilongLiu2023GroundingDM,
  title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection},
  author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang},
  year={2023}
}
```