Improve Docs dataset layout issues (#15696)

Co-authored-by: Francesco Mattioli <Francesco.mttl@gmail.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
pull/15765/head
Jan Knobloch 3 months ago committed by GitHub
parent 90be5f7266
commit 62094bd03f
  1. docs/en/datasets/classify/cifar10.md (4 changes)
  2. docs/en/datasets/classify/imagewoof.md (11 changes)
  3. docs/en/datasets/detect/roboflow-100.md (2 changes)
  4. docs/en/datasets/detect/visdrone.md (7 changes)
  5. docs/en/datasets/segment/crack-seg.md (4 changes)
  6. docs/en/datasets/segment/package-seg.md (8 changes)

@@ -153,6 +153,10 @@ Each subset comprises images categorized into 10 classes, with their annotations
If you use the CIFAR-10 dataset in your research or development projects, make sure to cite the following paper:
!!! Quote ""
=== "BibTeX"
```bibtex
@TECHREPORT{Krizhevsky09learningmultiple,
author={Alex Krizhevsky},

@@ -59,6 +59,10 @@ ImageWoof dataset comes in three different sizes to accommodate various research
To use these variants in your training, simply replace 'imagewoof' in the dataset argument with 'imagewoof320' or 'imagewoof160'. For example:
!!! Example "Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -72,6 +76,13 @@ model.train(data="imagewoof320", epochs=100, imgsz=224)
model.train(data="imagewoof160", epochs=100, imgsz=224)
```
=== "CLI"
```bash
# Load a pretrained model and train on the small-sized dataset
yolo classify train model=yolov8n-cls.pt data=imagewoof320 epochs=100 imgsz=224
```
It's important to note that using smaller images will likely yield lower performance in terms of classification accuracy. However, it's an excellent way to iterate quickly in the early stages of model development and prototyping.
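As an illustrative aside (not part of the diff), the speed gain from the smaller variants follows directly from pixel count: convolutional compute scales roughly with the number of input pixels. The sketch below uses the nominal sizes implied by the variant names — an assumption for illustration only:

```python
# Hypothetical sketch: relative per-image pixel counts for the smaller
# ImageWoof variants, using the nominal sizes implied by their names.
variants = {"imagewoof320": 320, "imagewoof160": 160}
pixels = {name: side * side for name, side in variants.items()}

# imagewoof160 carries a quarter of the pixels of imagewoof320,
# which is why it iterates faster at the cost of some accuracy.
ratio = pixels["imagewoof160"] / pixels["imagewoof320"]
print(f"imagewoof160 has {ratio:.0%} of the pixels of imagewoof320")
```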
## Sample Images and Annotations

@@ -203,7 +203,7 @@ The **Roboflow 100** dataset is accessible on [GitHub](https://github.com/robofl
When using the Roboflow 100 dataset in your research, ensure to properly cite it. Here is the recommended citation:
-!!! Quote
+!!! Quote ""
=== "BibTeX"

@@ -159,7 +159,9 @@ The configuration file for the VisDrone dataset, `VisDrone.yaml`, can be found i
If you use the VisDrone dataset in your research or development work, please cite the following paper:
-!!! Quote "BibTeX"
+!!! Quote ""
+=== "BibTeX"
```bibtex
@ARTICLE{9573394,
@@ -170,5 +172,6 @@ If you use the VisDrone dataset in your research or development work, please cit
volume={},
number={},
pages={1-1},
-doi={10.1109/TPAMI.2021.3119563}}
+doi={10.1109/TPAMI.2021.3119563}
+}
```

@@ -135,6 +135,10 @@ Ultralytics YOLO offers advanced real-time object detection, segmentation, and c
If you incorporate the Crack Segmentation Dataset into your research, please use the following BibTeX reference:
!!! Quote ""
=== "BibTeX"
```bibtex
@misc{ crack-bphdr_dataset,
title = { crack Dataset },

@@ -99,7 +99,11 @@ The [Roboflow Package Segmentation Dataset](https://universe.roboflow.com/factor
### How do I train an Ultralytics YOLOv8 model on the Package Segmentation Dataset?
-You can train an Ultralytics YOLOv8n model using both Python and CLI methods. For Python, use the snippet below:
+You can train an Ultralytics YOLOv8n model using both Python and CLI methods. Use the snippets below:
!!! Example "Train Example"
=== "Python"
```python
from ultralytics import YOLO
@@ -111,7 +115,7 @@ model = YOLO("yolov8n-seg.pt") # load a pretrained model
results = model.train(data="package-seg.yaml", epochs=100, imgsz=640)
```
-For CLI:
+=== "CLI"
```bash
# Start training from a pretrained *.pt model
yolo segment train model=yolov8n-seg.pt data=package-seg.yaml epochs=100 imgsz=640
```
