Evaluating copilots is essential to ensure your generative AI applications meet user needs, provide accurate responses, and continuously improve over time. Discover how to assess and optimize the performance of your generative AI applications using the tools and features available in the Azure AI Studio.
By the end of this module, you'll be able to:
- Understand model benchmarks.
- Perform manual evaluations.
- Assess your generative AI apps with AI-assisted metrics.
- Configure evaluation flows in the Microsoft Foundry portal.
- Get an introduction to monitoring Azure Machine Learning deployments.
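As a rough illustration of what AI-assisted evaluation produces, the sketch below aggregates per-question scores on the 1–5 scale used by metrics such as groundedness and relevance into a summary you can act on. The data, field names, and `summarize` helper are hypothetical illustrations, not part of any Azure SDK.

```python
from statistics import mean

# Hypothetical evaluation results: each entry is one question/answer pair
# scored 1-5 by an AI-assisted evaluator (e.g. groundedness, relevance).
results = [
    {"question": "What is Azure AI Studio?", "groundedness": 5, "relevance": 4},
    {"question": "How do I deploy a model?", "groundedness": 3, "relevance": 5},
    {"question": "What are tokens?", "groundedness": 4, "relevance": 4},
]

def summarize(results, metric):
    """Average one metric across all scored rows and flag low scorers."""
    scores = [r[metric] for r in results]
    low = [r["question"] for r in results if r[metric] <= 3]
    return {"mean": mean(scores), "needs_review": low}

print(summarize(results, "groundedness"))
# {'mean': 4, 'needs_review': ['How do I deploy a model?']}
```

In practice the per-row scores come from an evaluator model rather than hand labels, but the downstream step is the same: aggregate, then review the lowest-scoring question/answer pairs.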
After you complete this module, you'll be able to:
- Set up monitoring for Azure Machine Learning resources and workflows.
- Manage metrics and logs for Azure Machine Learning resources.
- Describe how monitoring for Azure Machine Learning models works.
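To make the "monitoring signals and metrics" objective concrete, here is a minimal sketch of one widely used drift signal, the Population Stability Index (PSI), which compares a feature's binned distribution at training time against production. This is a generic illustration of the technique, not Azure Machine Learning's implementation; the function name and thresholds are assumptions.

```python
import math

def population_stability_index(baseline, current, bins=10):
    """PSI drift signal: compares binned feature distributions between a
    baseline (training) window and a current (production) window.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 drifted."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant data

    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # small epsilon keeps empty bins from producing log(0)
        return [max(c / len(values), 1e-6) for c in counts]

    b, c = proportions(baseline), proportions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Identical windows score near zero; a shifted window scores much higher.
stable = population_stability_index([1, 2, 3, 4, 5] * 20, [1, 2, 3, 4, 5] * 20)
drifted = population_stability_index([1, 2, 3, 4, 5] * 20, [4, 5, 6, 7, 8] * 20)
```

A monitoring system evaluates signals like this on a schedule and raises an alert when a threshold is crossed, which is the same alerting pattern Azure Monitor applies to platform metrics and logs.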
Syllabus
- Evaluate generative AI performance in Microsoft Foundry portal
  - Introduction
  - Assess the model performance
  - Manually evaluate the performance of a model
  - Automated evaluations
  - Exercise - Evaluate generative AI model performance
  - Module assessment
  - Summary
- Introduction to Azure Machine Learning monitoring
  - Introduction
  - Monitoring Azure Machine Learning workspaces and compute
  - Azure Monitor platform metrics
  - Azure Monitor resource logs
  - Azure Monitor and alerts
  - Online endpoints
  - Azure Machine Learning model monitoring
  - Knowledge check
  - Monitoring signals and metrics
  - Summary