- Evaluating copilots is essential to ensure your generative AI applications meet user needs, provide accurate responses, and continuously improve over time. Discover how to assess and optimize the performance of your generative AI applications using the tools and features available in Azure AI Studio.
By the end of this module, you'll be able to:
- Understand model benchmarks.
- Perform manual evaluations.
- Assess your generative AI apps with AI-assisted metrics.
- Configure evaluation flows in the Microsoft Foundry portal.
- Get an introduction to monitoring Azure Machine Learning deployments.
After you complete this module, you'll be able to:
- Set up monitoring for Azure Machine Learning resources and workflows.
- Manage metrics and logs for Azure Machine Learning resources.
- Describe how monitoring for Azure Machine Learning models works.
Syllabus
- Evaluate generative AI performance in Microsoft Foundry portal
- Introduction
- Assess the model performance
- Manually evaluate the performance of a model
- Automated evaluations
- Exercise - Evaluate generative AI model performance
- Module assessment
- Summary
- Introduction to Azure Machine Learning monitoring
- Introduction
- Monitoring Azure Machine Learning workspaces and compute
- Azure Monitor platform metrics
- Azure Monitor resource logs
- Azure Monitor and alerts
- Online endpoints
- Azure Machine Learning model monitoring
- Knowledge check
- Monitoring signals and metrics
- Summary