Description: Learn how to deploy YOLOv8 models on Amazon SageMaker Endpoints. This guide covers the essentials of AWS environment setup, model preparation, and deployment using AWS CloudFormation and the AWS Cloud Development Kit (CDK).
# A Guide to Deploying YOLOv8 on Amazon SageMaker Endpoints
Deploying advanced computer vision models like [Ultralytics’ YOLOv8](https://github.com/ultralytics/ultralytics) on Amazon SageMaker Endpoints opens up a wide range of possibilities for various machine learning applications. The key to effectively using these models lies in understanding their setup, configuration, and deployment processes. YOLOv8 becomes even more powerful when integrated seamlessly with Amazon SageMaker, a robust and scalable machine learning service by AWS.
This guide will take you through the process of deploying YOLOv8 PyTorch models on Amazon SageMaker Endpoints step by step. You'll learn the essentials of preparing your AWS environment, configuring the model appropriately, and using tools like AWS CloudFormation and the AWS Cloud Development Kit (CDK) for deployment.
[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a machine learning service from Amazon Web Services (AWS) that simplifies the process of building, training, and deploying machine learning models. It provides a broad range of tools for handling various aspects of machine learning workflows. This includes automated features for tuning models, options for training models at scale, and straightforward methods for deploying models into production. SageMaker supports popular machine learning frameworks, offering the flexibility needed for diverse projects. Its features also cover data labeling, workflow management, and performance analysis.
## Deploying YOLOv8 on Amazon SageMaker Endpoints
Deploying YOLOv8 on Amazon SageMaker lets you use its managed environment for real-time inference and take advantage of features like autoscaling.
### Step 1: Set Up Your AWS Environment

First, ensure you have the following prerequisites in place:
- An AWS Account: If you don't already have one, sign up for an AWS account.
- Configured IAM Roles: You’ll need an IAM role with the necessary permissions for Amazon SageMaker, AWS CloudFormation, and Amazon S3. This role should have policies that allow it to access these services.
- AWS CLI: If not already installed, download and install the AWS Command Line Interface (CLI) and configure it with your account details; a quick sanity check covering the CLI and CDK is sketched after this list. Follow [the AWS CLI instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) for installation.
- AWS CDK: If not already installed, install the AWS Cloud Development Kit (CDK), which will be used for scripting the deployment. Follow [the AWS CDK instructions](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install) for installation.
- Adequate Service Quota: Confirm that you have sufficient quotas for two separate resources in Amazon SageMaker: one for ml.m5.4xlarge endpoint usage and another for ml.m5.4xlarge notebook instance usage. You need a quota of at least one for each; if your current quotas fall short, request an increase for both by following the detailed instructions in the [AWS Service Quotas documentation](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html#quota-console-increase).
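Once the prerequisites are installed, a quick sanity check along the following lines confirms that your credentials and tooling are in place. These are standard AWS CLI and CDK commands; no project-specific values are assumed.

```bash
# Configure credentials and a default region for the AWS CLI
aws configure

# Confirm the CLI can reach your AWS account
aws sts get-caller-identity

# Confirm the CDK toolkit is installed and print its version
cdk --version
```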
### Step 2: Clone the YOLOv8 SageMaker Repository
The next step is to clone the specific AWS repository that contains the resources for deploying YOLOv8 on SageMaker. This repository, hosted on GitHub, includes the necessary CDK scripts and configuration files.
- Clone the GitHub Repository: Execute the following command in your terminal to clone the host-yolov8-on-sagemaker-endpoint repository:
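```bash
# Repository referenced by the AWS Machine Learning Blog post on hosting YOLOv8 (aws-samples organization)
git clone https://github.com/aws-samples/host-yolov8-on-sagemaker-endpoint.git
```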
- Navigate to the Cloned Directory: Change your directory to the cloned repository:
```bash
cd host-yolov8-on-sagemaker-endpoint/yolov8-pytorch-cdk
```
### Step 3: Set Up the CDK Environment
Now that you have the necessary code, set up your environment for deploying with AWS CDK.
- Create a Python Virtual Environment: This isolates your Python environment and dependencies. Run:
```bash
python3 -m venv .venv
```
- Activate the Virtual Environment:
```bash
source .venv/bin/activate
```
- Install Dependencies: Install the required Python dependencies for the project:
```bash
pip3 install -r requirements.txt
```
- Upgrade AWS CDK Library: Ensure you have the latest version of the AWS CDK library:
```bash
pip install --upgrade aws-cdk-lib
```
### Step 4: Create the AWS CloudFormation Stack
- Synthesize the CDK Application: Generate the AWS CloudFormation template from your CDK code:
```bash
cdk synth
```
- Bootstrap the CDK Application: Prepare your AWS environment for CDK deployment:
```bash
cdk bootstrap
```
- Deploy the Stack: This will create the necessary AWS resources and deploy your model:
```bash
cdk deploy
```
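The `cdk deploy` command reports the stack name and its outputs when it finishes. If you want to confirm the deployment from the command line, a check along these lines works; the stack name below is a placeholder, so substitute the one reported by `cdk deploy`.

```bash
# Show the stack status and outputs; replace the placeholder with your actual stack name
aws cloudformation describe-stacks \
    --stack-name <your-yolov8-cdk-stack-name> \
    --query "Stacks[0].{Status: StackStatus, Outputs: Outputs}"
```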
### Step 5: Deploy the YOLOv8 Model
Before diving into the deployment instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
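The pretrained detection checkpoints range from nano (`yolov8n`) to extra-large (`yolov8x`), trading inference speed for accuracy. If you have the `ultralytics` package installed locally, a quick comparison like the following sketch can help you pick a variant before committing to a deployment; the model names are the standard pretrained weights and download automatically on first use.

```python
from ultralytics import YOLO

# Standard detection checkpoints, from smallest/fastest to largest/most accurate
for name in ("yolov8n.pt", "yolov8s.pt", "yolov8m.pt", "yolov8l.pt", "yolov8x.pt"):
    model = YOLO(name)  # downloads the weights on first use
    num_params = sum(p.numel() for p in model.model.parameters())
    print(f"{name}: {num_params / 1e6:.1f}M parameters")
```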
After creating the AWS CloudFormation Stack, the next step is to deploy YOLOv8.
- Open the Notebook Instance: Go to the AWS Console and navigate to the Amazon SageMaker service. Select "Notebook Instances" from the dashboard, then locate the notebook instance that was created by your CDK deployment script. Open the notebook instance to access the Jupyter environment.
- Access and Modify inference.py: After opening the SageMaker notebook instance in Jupyter, locate the inference.py file. Edit the output_fn function in inference.py as shown below and save your changes to the script, ensuring that there are no syntax errors.
```python
import json


def output_fn(prediction_output, content_type):
    """Serialize YOLOv8 prediction results into a JSON response for the endpoint."""
    print("Executing output_fn from inference.py ...")
    infer = {}
    for result in prediction_output:
        if 'boxes' in result._keys and result.boxes is not None:
            infer['boxes'] = result.boxes.numpy().data.tolist()
        if 'masks' in result._keys and result.masks is not None:
            infer['masks'] = result.masks.numpy().data.tolist()
        if 'keypoints' in result._keys and result.keypoints is not None:
            infer['keypoints'] = result.keypoints.numpy().data.tolist()
        if 'probs' in result._keys and result.probs is not None:
            infer['probs'] = result.probs.numpy().data.tolist()
    return json.dumps(infer)
```
- Deploy the Endpoint Using 1_DeployEndpoint.ipynb: In the Jupyter environment, open the 1_DeployEndpoint.ipynb notebook located in the sm-notebook directory. Follow the instructions in the notebook and run the cells to download the YOLOv8 model, package it with the updated inference code, and upload it to an Amazon S3 bucket. The notebook will guide you through creating and deploying a SageMaker endpoint for the YOLOv8 model.
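Under the hood, the notebook's workflow boils down to fetching a YOLOv8 checkpoint, bundling it together with `inference.py` into the `model.tar.gz` archive that SageMaker expects, and uploading that archive to S3. The sketch below illustrates those steps conceptually; the bucket name and archive layout are placeholders, and the notebook's own code remains the authoritative version.

```python
import tarfile

import boto3
from ultralytics import YOLO

# Fetch the YOLOv8 nano checkpoint (any variant such as yolov8s.pt or yolov8m.pt works too)
YOLO("yolov8n.pt")

# Bundle the weights and the customized inference script into a model archive
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("yolov8n.pt")
    tar.add("inference.py", arcname="code/inference.py")

# Upload the archive to S3; the bucket name below is a placeholder
boto3.client("s3").upload_file("model.tar.gz", "<your-model-artifacts-bucket>", "yolov8/model.tar.gz")
```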
### Step 6: Testing Your Deployment
Now that your YOLOv8 model is deployed, it's important to test its performance and functionality.
- Open the Test Notebook: In the same Jupyter environment, locate and open the 2_TestEndpoint.ipynb notebook, also in the sm-notebook directory.
- Run the Test Notebook: Follow the instructions within the notebook to test the deployed SageMaker endpoint. This includes sending an image to the endpoint, running inference, and plotting the output to visualize the model's performance and accuracy. If you would rather test from your own code, a minimal invocation sketch follows below.
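As a rough illustration of what the test notebook does, the following sketch invokes the endpoint directly with `boto3`. The endpoint name and the accepted content type depend on how the stack and `inference.py` were configured, so treat both as placeholders.

```python
import json

import boto3

# Placeholder endpoint name; use the one reported by 1_DeployEndpoint.ipynb or the CDK outputs
ENDPOINT_NAME = "<your-yolov8-endpoint-name>"

runtime = boto3.client("sagemaker-runtime")

# Send the raw bytes of a local test image to the endpoint
with open("test_image.jpg", "rb") as f:
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="image/jpeg",
        Body=f.read(),
    )

# output_fn in inference.py returns a JSON string, so parse it back into a dict
result = json.loads(response["Body"].read().decode("utf-8"))
print(result.get("boxes", [])[:3])  # first few boxes: [x1, y1, x2, y2, confidence, class]
```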
- Clean-Up Resources: The test notebook will also guide you through the process of cleaning up the endpoint and the hosted model. This is an important step to manage costs and resources effectively, especially if you do not plan to use the deployed model immediately.
### Step 7: Monitoring and Management
After testing, continuous monitoring and management of your deployed model are essential.
- Monitor with Amazon CloudWatch: Regularly check the performance and health of your SageMaker endpoint using [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/); a sample metrics query is sketched after this list.
- Manage the Endpoint: Use the SageMaker console for ongoing management of the endpoint. This includes scaling, updating, or redeploying the model as required.
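For example, SageMaker endpoints publish invocation metrics to CloudWatch under the `AWS/SageMaker` namespace. A query along these lines sums the invocations over the last hour; the endpoint name is a placeholder, the variant name `AllTraffic` is the common default but may differ in your stack, and the time window uses GNU `date` syntax.

```bash
# Sum endpoint invocations over the last hour (replace the endpoint name placeholder)
aws cloudwatch get-metric-statistics \
    --namespace AWS/SageMaker \
    --metric-name Invocations \
    --dimensions Name=EndpointName,Value=<your-yolov8-endpoint-name> Name=VariantName,Value=AllTraffic \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%S)" \
    --period 300 \
    --statistics Sum
```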
By completing these steps, you will have successfully deployed and tested a YOLOv8 model on Amazon SageMaker Endpoints. This process not only equips you with practical experience in using AWS services for machine learning deployment but also lays the foundation for deploying other advanced models in the future.
## Summary
This guide took you step by step through deploying YOLOv8 on Amazon SageMaker Endpoints using AWS CloudFormation and the AWS Cloud Development Kit (CDK). The process includes cloning the necessary GitHub repository, setting up the CDK environment, deploying the model using AWS services, and testing its performance on SageMaker.
For more technical details, refer to [this article](https://aws.amazon.com/blogs/machine-learning/hosting-yolov8-pytorch-model-on-amazon-sagemaker-endpoints/) on the AWS Machine Learning Blog. You can also check out the official [Amazon SageMaker Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) for more insights into various features and functionalities.
Are you interested in learning more about different YOLOv8 integrations? Visit the [Ultralytics integrations guide page](../integrations/index.md) to discover additional tools and capabilities that can enhance your machine learning projects.