- Learn how to deploy models to a managed online endpoint for real-time inferencing.
In this module, you'll learn how to:
- Use managed online endpoints.
- Deploy your MLflow model to a managed online endpoint.
- Deploy a custom model to a managed online endpoint.
- Test online endpoints.
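As a concrete illustration of the first module's topic, a managed online deployment for an MLflow model can be declared in Azure Machine Learning CLI (v2) YAML. This is a minimal sketch only; the endpoint name, model path, and instance size below are placeholder assumptions:

```yaml
# Hypothetical endpoint definition (endpoint.yml)
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-mlflow-endpoint        # placeholder endpoint name
auth_mode: key
```

```yaml
# Hypothetical deployment definition (deployment.yml)
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-mlflow-endpoint
model:
  path: ./model                 # local folder containing the MLflow model
  type: mlflow_model            # MLflow models need no custom scoring script
instance_type: Standard_DS3_v2  # placeholder VM size
instance_count: 1
```

Because the model type is `mlflow_model`, Azure Machine Learning can generate the scoring script and environment automatically; deploying a custom model instead requires supplying both.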
- Learn how to deploy models to a batch endpoint. When you invoke a batch endpoint, you trigger a batch scoring job.
In this module, you'll learn how to:
- Create a batch endpoint.
- Deploy your MLflow model to a batch endpoint.
- Deploy a custom model to a batch endpoint.
- Invoke batch endpoints.
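For comparison with the online case, a batch deployment of an MLflow model might be declared as follows (again a sketch in Azure Machine Learning CLI v2 YAML; the names, compute cluster, and batch settings are placeholder assumptions):

```yaml
# Hypothetical batch deployment definition (batch-deployment.yml)
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: mlflow-batch
endpoint_name: my-batch-endpoint  # assumes the batch endpoint already exists
model:
  path: ./model
  type: mlflow_model
compute: azureml:cpu-cluster      # placeholder compute cluster
resources:
  instance_count: 2
max_concurrency_per_instance: 2
mini_batch_size: 10               # files scored per mini-batch
output_action: append_row
output_file_name: predictions.csv
```

Invoking the endpoint (for example with `az ml batch-endpoint invoke --name my-batch-endpoint --input azureml:new-data:1`) then submits a batch scoring job rather than returning predictions immediately.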
Overview
Syllabus
- Deploy a model to a managed online endpoint
- Introduction
- Explore managed online endpoints
- Deploy your MLflow model to a managed online endpoint
- Deploy a custom model to a managed online endpoint
- Test managed online endpoints
- Exercise - Deploy an MLflow model to an online endpoint
- Module assessment
- Summary
- Deploy a model to a batch endpoint
- Introduction
- Understand and create batch endpoints
- Deploy your MLflow model to a batch endpoint
- Deploy a custom model to a batch endpoint
- Invoke and troubleshoot batch endpoints
- Exercise - Deploy an MLflow model to a batch endpoint
- Module assessment
- Summary