From 62094bd03fc6d14726a8362ccb7e2872fe6b7909 Mon Sep 17 00:00:00 2001
From: Jan Knobloch <116908874+jk4e@users.noreply.github.com>
Date: Thu, 22 Aug 2024 19:56:12 +0200
Subject: [PATCH] Improve Docs dataset layout issues (#15696)

Co-authored-by: Francesco Mattioli
Co-authored-by: Glenn Jocher
---
 docs/en/datasets/classify/cifar10.md    | 20 +++++++++-------
 docs/en/datasets/classify/imagewoof.md  | 29 +++++++++++++++-------
 docs/en/datasets/detect/roboflow-100.md |  2 +-
 docs/en/datasets/detect/visdrone.md     | 29 ++++++++++++----------
 docs/en/datasets/segment/crack-seg.md   | 32 ++++++++++++++-----------
 docs/en/datasets/segment/package-seg.md | 30 +++++++++++++----------
 6 files changed, 84 insertions(+), 58 deletions(-)

diff --git a/docs/en/datasets/classify/cifar10.md b/docs/en/datasets/classify/cifar10.md
index b4742cbcb..54f9e9c2d 100644
--- a/docs/en/datasets/classify/cifar10.md
+++ b/docs/en/datasets/classify/cifar10.md
@@ -153,14 +153,18 @@ Each subset comprises images categorized into 10 classes, with their annotations
 
 If you use the CIFAR-10 dataset in your research or development projects, make sure to cite the following paper:
 
-```bibtex
-@TECHREPORT{Krizhevsky09learningmultiple,
-    author={Alex Krizhevsky},
-    title={Learning multiple layers of features from tiny images},
-    institution={},
-    year={2009}
-}
-```
+!!! Quote ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @TECHREPORT{Krizhevsky09learningmultiple,
+            author={Alex Krizhevsky},
+            title={Learning multiple layers of features from tiny images},
+            institution={},
+            year={2009}
+        }
+        ```
 
 Acknowledging the dataset's creators helps support continued research and development in the field. For more details, see the [citations and acknowledgments](#citations-and-acknowledgments) section.
 
diff --git a/docs/en/datasets/classify/imagewoof.md b/docs/en/datasets/classify/imagewoof.md
index 5a76d97fc..e6668dfcb 100644
--- a/docs/en/datasets/classify/imagewoof.md
+++ b/docs/en/datasets/classify/imagewoof.md
@@ -59,18 +59,29 @@ ImageWoof dataset comes in three different sizes to accommodate various research
 
 To use these variants in your training, simply replace 'imagewoof' in the dataset argument with 'imagewoof320' or 'imagewoof160'. For example:
 
-```python
-from ultralytics import YOLO
+!!! Example "Example"
 
-# Load a model
-model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)
+    === "Python"
+
+        ```python
+        from ultralytics import YOLO
+
+        # Load a model
+        model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)
 
-# For medium-sized dataset
-model.train(data="imagewoof320", epochs=100, imgsz=224)
+        # For medium-sized dataset
+        model.train(data="imagewoof320", epochs=100, imgsz=224)
 
-# For small-sized dataset
-model.train(data="imagewoof160", epochs=100, imgsz=224)
-```
+        # For small-sized dataset
+        model.train(data="imagewoof160", epochs=100, imgsz=224)
+        ```
+
+    === "CLI"
+
+        ```bash
+        # Load a pretrained model and train on the medium-sized dataset
+        yolo classify train model=yolov8n-cls.pt data=imagewoof320 epochs=100 imgsz=224
+        ```
 
 It's important to note that using smaller images will likely yield lower performance in terms of classification accuracy. However, it's an excellent way to iterate quickly in the early stages of model development and prototyping.
 
diff --git a/docs/en/datasets/detect/roboflow-100.md b/docs/en/datasets/detect/roboflow-100.md
index d8c61c37c..541f8e8e5 100644
--- a/docs/en/datasets/detect/roboflow-100.md
+++ b/docs/en/datasets/detect/roboflow-100.md
@@ -203,7 +203,7 @@ The **Roboflow 100** dataset is accessible on [GitHub](https://github.com/robofl
 
 When using the Roboflow 100 dataset in your research, ensure to properly cite it. Here is the recommended citation:
 
-!!! Quote
+!!! Quote ""
 
     === "BibTeX"
 
diff --git a/docs/en/datasets/detect/visdrone.md b/docs/en/datasets/detect/visdrone.md
index 00d84b10e..fab2fe80f 100644
--- a/docs/en/datasets/detect/visdrone.md
+++ b/docs/en/datasets/detect/visdrone.md
@@ -159,16 +159,19 @@ The configuration file for the VisDrone dataset, `VisDrone.yaml`, can be found i
 
 If you use the VisDrone dataset in your research or development work, please cite the following paper:
 
-!!! Quote "BibTeX"
-
-    ```bibtex
-    @ARTICLE{9573394,
-        author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
-        journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
-        title={Detection and Tracking Meet Drones Challenge},
-        year={2021},
-        volume={},
-        number={},
-        pages={1-1},
-        doi={10.1109/TPAMI.2021.3119563}}
-    ```
+!!! Quote ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @ARTICLE{9573394,
+            author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
+            journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
+            title={Detection and Tracking Meet Drones Challenge},
+            year={2021},
+            volume={},
+            number={},
+            pages={1-1},
+            doi={10.1109/TPAMI.2021.3119563}
+        }
+        ```
diff --git a/docs/en/datasets/segment/crack-seg.md b/docs/en/datasets/segment/crack-seg.md
index 2dd897263..83f019871 100644
--- a/docs/en/datasets/segment/crack-seg.md
+++ b/docs/en/datasets/segment/crack-seg.md
@@ -135,19 +135,23 @@ Ultralytics YOLO offers advanced real-time object detection, segmentation, and c
 
 If you incorporate the Crack Segmentation Dataset into your research, please use the following BibTeX reference:
 
-```bibtex
-@misc{ crack-bphdr_dataset,
-    title = { crack Dataset },
-    type = { Open Source Dataset },
-    author = { University },
-    howpublished = { \url{ https://universe.roboflow.com/university-bswxt/crack-bphdr } },
-    url = { https://universe.roboflow.com/university-bswxt/crack-bphdr },
-    journal = { Roboflow Universe },
-    publisher = { Roboflow },
-    year = { 2022 },
-    month = { dec },
-    note = { visited on 2024-01-23 },
-}
-```
+!!! Quote ""
+
+    === "BibTeX"
+
+        ```bibtex
+        @misc{ crack-bphdr_dataset,
+            title = { crack Dataset },
+            type = { Open Source Dataset },
+            author = { University },
+            howpublished = { \url{ https://universe.roboflow.com/university-bswxt/crack-bphdr } },
+            url = { https://universe.roboflow.com/university-bswxt/crack-bphdr },
+            journal = { Roboflow Universe },
+            publisher = { Roboflow },
+            year = { 2022 },
+            month = { dec },
+            note = { visited on 2024-01-23 },
+        }
+        ```
 
 This citation format ensures proper accreditation to the creators of the dataset and acknowledges its use in your research.
diff --git a/docs/en/datasets/segment/package-seg.md b/docs/en/datasets/segment/package-seg.md
index af4f90a51..86fad9e9b 100644
--- a/docs/en/datasets/segment/package-seg.md
+++ b/docs/en/datasets/segment/package-seg.md
@@ -99,24 +99,28 @@ The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factor
 
 ### How do I train an Ultralytics YOLOv8 model on the Package Segmentation Dataset?
 
-You can train an Ultralytics YOLOv8n model using both Python and CLI methods. For Python, use the snippet below:
+You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Use the snippets below:
 
-```python
-from ultralytics import YOLO
+!!! Example "Train Example"
 
-# Load a model
-model = YOLO("yolov8n-seg.pt")  # load a pretrained model
+    === "Python"
+
+        ```python
+        from ultralytics import YOLO
 
-# Train the model
-results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
-```
+        # Load a model
+        model = YOLO("yolov8n-seg.pt")  # load a pretrained model
 
-For CLI:
+        # Train the model
+        results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
+        ```
 
-```bash
-# Start training from a pretrained *.pt model
-yolo segment train data=package-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
-```
+    === "CLI"
+
+        ```bash
+        # Start training from a pretrained *.pt model
+        yolo segment train data=package-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
+        ```
 
 Refer to the model [Training](../../modes/train.md) page for more details.
 