---
comments: true
description: Discover Ultralytics YOLO - the latest in real-time object detection and image segmentation. Learn its features and maximize its potential in your projects.
keywords: Ultralytics, YOLO, YOLO11, object detection, image segmentation, deep learning, computer vision, AI, machine learning, documentation, tutorial
---

Introducing [Ultralytics](https://www.ultralytics.com/) [YOLO11](https://github.com/ultralytics/ultralytics), the latest version of the acclaimed real-time object detection and image segmentation model. YOLO11 is built on cutting-edge advancements in [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv), offering unparalleled performance in terms of speed and [accuracy](https://www.ultralytics.com/glossary/accuracy). Its streamlined design makes it suitable for various applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs. Explore the Ultralytics Docs, a comprehensive resource designed to help you understand and utilize its features and capabilities. Whether you are a seasoned [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) practitioner or new to the field, this hub aims to help you maximize YOLO's potential in your projects.

## Where to Start
<div class="grid cards" markdown>

- :material-clock-fast:{ .lg .middle } **Getting Started**

    ***

    Install `ultralytics` with pip and get up and running in minutes to train a YOLO model

    ***

    [:octicons-arrow-right-24: Quickstart](quickstart.md)

- :material-image:{ .lg .middle } **Predict**

    ***

    Predict on new images, videos and streams with YOLO (a minimal example follows these cards)

    ***

    [:octicons-arrow-right-24: Learn more](modes/predict.md)

- :fontawesome-solid-brain:{ .lg .middle } **Train a Model**

    ***

    Train a new YOLO model on your own custom dataset from scratch or load and train on a pretrained model

    ***

    [:octicons-arrow-right-24: Learn more](modes/train.md)

- :material-magnify-expand:{ .lg .middle } **Explore Tasks**

    ***

    Discover YOLO tasks like detect, segment, classify, pose, OBB and track

    ***

    [:octicons-arrow-right-24: Explore Tasks](tasks/index.md)

- :rocket:{ .lg .middle } **Explore YOLO11 NEW**

    ***

    Discover Ultralytics' latest state-of-the-art YOLO11 models and their capabilities

    ***

    [:octicons-arrow-right-24: YOLO11 Models 🚀 NEW](models/yolo11.md)

- :material-scale-balance:{ .lg .middle } **Open Source, AGPL-3.0**

    ***

    Ultralytics offers two licensing options for YOLO: AGPL-3.0 License and Enterprise License. Ultralytics is available on [GitHub](https://github.com/ultralytics/ultralytics)

    ***

    [:octicons-arrow-right-24: License](https://www.ultralytics.com/license)

</div>
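As a quick taste of the Predict workflow referenced in the cards above, here is a minimal sketch that runs a pretrained model on a sample image. It assumes the `ultralytics` package is installed; the image URL is a sample asset commonly used in Ultralytics examples.

```python
from ultralytics import YOLO

# Load a pretrained YOLO11 nano model (weights are downloaded on first use)
model = YOLO("yolo11n.pt")

# Run inference; returns a list of Results objects, one per image
results = model("https://ultralytics.com/images/bus.jpg")

# Inspect and visualize the first result
print(results[0].boxes)  # bounding boxes, classes, and confidences
results[0].show()  # display the annotated image
```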



Watch: How to Train a YOLO model on Your Custom Dataset in Google Colab.

## YOLO: A Brief History

[YOLO](https://arxiv.org/abs/1506.02640) (You Only Look Once), a popular [object detection](https://www.ultralytics.com/glossary/object-detection) and [image segmentation](https://www.ultralytics.com/glossary/image-segmentation) model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. Launched in 2015, YOLO quickly gained popularity for its high speed and accuracy.

- [YOLOv2](https://arxiv.org/abs/1612.08242), released in 2016, improved the original model by incorporating batch normalization, anchor boxes, and dimension clusters.
- [YOLOv3](https://pjreddie.com/media/files/papers/YOLOv3.pdf), launched in 2018, further enhanced the model's performance using a more efficient backbone network, multiple anchors, and spatial pyramid pooling.
- [YOLOv4](https://arxiv.org/abs/2004.10934) was released in 2020, introducing innovations like Mosaic [data augmentation](https://www.ultralytics.com/glossary/data-augmentation), a new anchor-free detection head, and a new [loss function](https://www.ultralytics.com/glossary/loss-function).
- [YOLOv5](https://github.com/ultralytics/yolov5) further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking, and automatic export to popular formats.
- [YOLOv6](https://github.com/meituan/YOLOv6) was open-sourced by [Meituan](https://about.meituan.com/) in 2022 and is used in many of the company's autonomous delivery robots.
- [YOLOv7](https://github.com/WongKinYiu/yolov7) added additional tasks such as pose estimation on the COCO keypoints dataset.
- [YOLOv8](https://github.com/ultralytics/ultralytics), released in 2023 by Ultralytics, introduced new features and improvements for enhanced performance, flexibility, and efficiency, supporting a full range of vision AI tasks.
- [YOLOv9](models/yolov9.md) introduces innovative methods like Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN).
- [YOLOv10](models/yolov10.md) was created by researchers from [Tsinghua University](https://www.tsinghua.edu.cn/en/) using the [Ultralytics](https://www.ultralytics.com/) [Python package](https://pypi.org/project/ultralytics/). This version advances real-time [object detection](tasks/detect.md) by introducing an end-to-end head that eliminates the need for Non-Maximum Suppression (NMS).
- **[YOLO11](models/yolo11.md) 🚀 NEW**: Ultralytics' latest YOLO models deliver state-of-the-art (SOTA) performance across multiple tasks, including [detection](tasks/detect.md), [segmentation](tasks/segment.md), [pose estimation](tasks/pose.md), [tracking](modes/track.md), and [classification](tasks/classify.md), leveraging capabilities across diverse AI applications and domains.

## YOLO Licenses: How is Ultralytics YOLO licensed?

Ultralytics offers two licensing options to accommodate diverse use cases:

- **AGPL-3.0 License**: This [OSI-approved](https://opensource.org/license) open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for more details.
- **Enterprise License**: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services, bypassing the open-source requirements of AGPL-3.0.
If your scenario involves embedding our solutions into a commercial offering, reach out through [Ultralytics Licensing](https://www.ultralytics.com/license).

Our licensing strategy is designed to ensure that any improvements to our open-source projects are returned to the community. We hold the principles of open source close to our hearts ❤️, and our mission is to guarantee that our contributions can be utilized and expanded upon in ways that are beneficial to all.

## FAQ

### What is Ultralytics YOLO and how does it improve object detection?

Ultralytics YOLO is the latest advancement in the acclaimed YOLO (You Only Look Once) series for real-time object detection and image segmentation. It builds on previous versions by introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLO supports various [vision AI tasks](tasks/index.md) such as detection, segmentation, pose estimation, tracking, and classification. Its state-of-the-art architecture ensures superior speed and accuracy, making it suitable for diverse applications, including edge devices and cloud APIs.

### How can I get started with YOLO installation and setup?

Getting started with YOLO is quick and straightforward. You can install the Ultralytics package using [pip](https://pypi.org/project/ultralytics/) and get up and running in minutes. Here's a basic installation command:

!!! example "Installation using pip"

    === "CLI"

        ```bash
        pip install ultralytics
        ```

For a comprehensive step-by-step guide, visit our [quickstart guide](quickstart.md). This resource will help you with installation instructions, initial setup, and running your first model.

### How can I train a custom YOLO model on my dataset?

Training a custom YOLO model on your dataset involves a few steps:

1. Prepare your annotated dataset.
2. Configure the training parameters in a YAML file (a sketch of such a file follows the example below).
3. Use the `yolo TASK train` command to start training. (Each `TASK` has its own arguments.)

Here's example code for the object detection task:

!!! example "Train Example for Object Detection Task"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pretrained YOLO model (you can choose the n, s, m, l, or x version)
        model = YOLO("yolo11n.pt")

        # Start training on your custom dataset
        model.train(data="path/to/dataset.yaml", epochs=100, imgsz=640)
        ```

    === "CLI"

        ```bash
        # Train a YOLO model from the command line
        yolo detect train data=path/to/dataset.yaml epochs=100 imgsz=640
        ```
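The `data` argument above points to a dataset YAML file describing your images and classes. As a rough, hypothetical sketch (the paths and class names here are placeholders; adjust them to your own data):

```yaml
# dataset.yaml -- hypothetical example for a two-class detection dataset
path: ../datasets/my_dataset # dataset root directory
train: images/train # training images, relative to path
val: images/val # validation images, relative to path

# Class names, mapping integer class IDs to labels
names:
  0: person
  1: car
```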
example "Example for Object Tracking on a Video" === "Python" ```python from ultralytics import YOLO # Load a pre-trained YOLO model model = YOLO("yolo11n.pt") # Start tracking objects in a video # You can also use live video streams or webcam input model.track(source="path/to/video.mp4") ``` === "CLI" ```bash # Perform object tracking on a video from the command line # You can specify different sources like webcam (0) or RTSP streams yolo track source=path/to/video.mp4 ``` For a detailed guide on setting up and running object tracking, check our [tracking mode](modes/track.md) documentation, which explains the configuration and practical applications in real-time scenarios.