- Learn how to plan and prepare a GenAIOps solution.
By the end of this module, you'll be able to:
- Identify use cases for generative AI applications.
- Select a model for your generative AI application.
- Describe what GenAIOps is, and how it defines the app lifecycle.
- Learn how to manage prompts for agents in Microsoft Foundry using GitHub version control and collaboration features.
By the end of this module, you'll be able to:
- Apply version control principles to manage prompts as code assets.
- Understand how prompts integrate with Microsoft Foundry agents and which versioning strategies apply.
- Design a GitHub repository structure for prompt versioning and collaboration.
- Develop a workflow for testing and deploying prompts safely, as sketched below.
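To make this concrete, here's a minimal sketch of treating prompts as versioned code assets in a Git repository. The `prompts/` layout and `load_prompt` helper are assumptions for illustration, not part of the Microsoft Foundry API:

```python
from pathlib import Path

# Assumed repository layout (illustrative, not prescribed by the module):
#   prompts/
#     customer-support/
#       v1.md
#       v2.md
PROMPTS_DIR = Path("prompts")

def load_prompt(name: str, version: str) -> str:
    """Read one versioned prompt file; Git history records every change."""
    return (PROMPTS_DIR / name / f"{version}.md").read_text(encoding="utf-8")

if __name__ == "__main__":
    # Pinning an explicit version keeps agent behavior reproducible.
    print(load_prompt("customer-support", "v2"))
```

Pinning versions in configuration, rather than editing a prompt in place, is what makes rollbacks and side-by-side comparisons safe.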
- Learn how to evaluate and optimize AI agents systematically through structured experiments that measure quality, cost, and performance. Design evaluation metrics, apply Git-based workflows, create consistent scoring rubrics, and make evidence-based optimization decisions.
By the end of this module, you'll be able to:
- Design evaluation experiments with clear metrics for quality, cost, and performance.
- Apply Git-based workflows to organize and compare agent variants systematically.
- Create evaluation rubrics that ensure consistent scoring across human evaluators, as sketched below.
- Compare experiment results to make evidence-based optimization decisions.
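As one illustration (the criteria, weights, and 1-5 scale below are invented for the example), encoding a rubric in code gives every human evaluator the same definitions and a single way to combine scores:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    name: str
    description: str  # what evaluators should look for
    weight: float     # relative importance; weights sum to 1.0

# Example rubric; criteria and weights are assumptions for illustration.
RUBRIC = [
    Criterion("accuracy", "Response is factually correct and grounded.", 0.5),
    Criterion("completeness", "Response addresses every part of the question.", 0.3),
    Criterion("tone", "Response matches the agent's intended voice.", 0.2),
]

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-criterion scores (1-5) into one weighted score."""
    return sum(c.weight * scores[c.name] for c in RUBRIC)

if __name__ == "__main__":
    print(weighted_score({"accuracy": 5, "completeness": 4, "tone": 3}))  # 4.3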
- Learn how to implement automated evaluations for AI agents using Microsoft Foundry evaluators and GitHub Actions workflows.
By the end of this module, you'll be able to:
- Explain why automated evaluations complement human evaluations in AI quality assurance.
- Select evaluators that align with human evaluation criteria for validation.
- Create evaluation datasets with appropriate composition for comprehensive testing.
- Implement batch evaluations using Python scripts with Microsoft Foundry, as sketched below.
- Integrate automated evaluation workflows into GitHub Actions for continuous testing.
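Here is a minimal sketch of the batch-evaluation idea, with a stub `score_response` standing in for a Microsoft Foundry evaluator; the real evaluator APIs are covered in the module, and the dataset file name and 0.8 CI gate are assumed values:

```python
import json
from pathlib import Path
from statistics import mean

def score_response(query: str, response: str) -> float:
    """Stub evaluator for a self-contained example; in the module, a
    Microsoft Foundry evaluator produces this score instead."""
    return 1.0 if response.strip() else 0.0

def run_batch(dataset_path: str) -> float:
    """Evaluate every record in a JSONL dataset and return the mean score."""
    scores = []
    for line in Path(dataset_path).read_text(encoding="utf-8").splitlines():
        record = json.loads(line)  # expects {"query": ..., "response": ...}
        scores.append(score_response(record["query"], record["response"]))
    return mean(scores)

if __name__ == "__main__":
    # In CI, a GitHub Actions step could run this script and fail the job
    # when the mean score drops below an agreed threshold (0.8 is assumed).
    score = run_batch("eval_dataset.jsonl")
    print(f"mean score: {score:.2f}")
    raise SystemExit(0 if score >= 0.8 else 1)
```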
- Learn how to monitor the performance of your generative AI application using Microsoft Foundry. This module teaches you to track key metrics like latency and token usage to make informed, cost-effective deployment decisions.
By the end of this module, you'll be able to:
- Understand why monitoring is essential when moving generative AI apps toward production readiness.
- Identify and interpret key performance metrics: latency, throughput, token usage, and error rates (a minimal sketch follows this list).
- Use Azure Monitor together with Microsoft Foundry to observe and analyze app behavior.
- Apply insights to optimize performance, cost, and user experience in generative AI solutions.
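As a self-contained illustration of what "key metrics" means in practice, this sketch records latency and token usage around a stubbed model call; in a real app these values would flow to Azure Monitor rather than be printed:

```python
import time

def call_model(prompt: str) -> dict:
    """Stub standing in for a real model call; returns fake token counts."""
    time.sleep(0.05)
    return {"text": "...", "prompt_tokens": 12, "completion_tokens": 40}

def timed_call(prompt: str) -> dict:
    """Wrap a model call to record latency and token usage per request."""
    start = time.perf_counter()
    result = call_model(prompt)
    latency_ms = (time.perf_counter() - start) * 1000
    return {
        "latency_ms": round(latency_ms, 1),
        "total_tokens": result["prompt_tokens"] + result["completion_tokens"],
    }

if __name__ == "__main__":
    print(timed_call("Summarize GenAIOps in one sentence."))
```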
- Learn how to implement tracing in your generative AI applications using Microsoft Foundry and OpenTelemetry. This module teaches you to capture detailed execution flows, debug complex workflows, and understand application behavior for better reliability and optimization.
By the end of this module, you'll be able to:
- Set up tracing infrastructure with Microsoft Foundry and Application Insights.
- Implement custom spans for AI model calls and business logic operations, as sketched below.
- Analyze trace data to identify performance bottlenecks and failure patterns.
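For a taste of the pattern, here is a minimal custom-span example using the OpenTelemetry Python SDK with a console exporter; the module exports to Application Insights instead, and the span names and attributes here are illustrative:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Console exporter for a self-contained demo; the module wires spans to
# Application Insights instead (e.g., via the Azure Monitor exporter).
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def answer_question(question: str) -> str:
    # One span per logical step makes bottlenecks and failures visible.
    with tracer.start_as_current_span("answer_question") as span:
        span.set_attribute("app.question_length", len(question))
        with tracer.start_as_current_span("model_call") as model_span:
            response = "A placeholder answer."  # stub for a real model call
            model_span.set_attribute("app.total_tokens", 42)  # assumed value
        return response

if __name__ == "__main__":
    print(answer_question("What is GenAIOps?"))
```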
AI Engineer - Learn how to integrate AI into software applications
Syllabus
- Plan and prepare a GenAIOps solution
  - Introduction
  - Explore use cases for GenAIOps
  - Select the right generative AI model
  - Understand the development lifecycle of a language model application
  - Explore available tools and frameworks to implement GenAIOps
  - Exercise - Compare language models from the model catalog
  - Module assessment
  - Summary
- Manage prompts for agents in Microsoft Foundry with GitHub
  - Introduction
  - Apply version control to prompts
  - Understand Microsoft Foundry agents and prompt versioning
  - Organize prompts in GitHub repositories
  - Develop safe prompt deployment workflows
  - Exercise - Develop prompt and agent versions
  - Knowledge check
  - Summary
- Evaluate and optimize AI agents through structured experiments
  - Introduction
  - Design evaluation experiments
  - Apply Git-based workflows to optimization experiments
  - Apply evaluation rubrics for consistent scoring
  - Exercise - Evaluate and compare AI agent versions
  - Knowledge check
  - Summary
- Automate AI evaluations with Microsoft Foundry and GitHub Actions
  - Introduction
  - Understand why automated evaluations matter
  - Align evaluators with human criteria
  - Create evaluation datasets
  - Implement batch evaluations with Python
  - Integrate evaluations into GitHub Actions
  - Exercise - Set up automated evaluations
  - Knowledge check
  - Summary
- Monitor your generative AI application
  - Introduction
  - Why do you need to monitor?
  - Understand key metrics to monitor
  - Explore how to monitor with Azure
  - Integrate monitoring into your app
  - Interpret monitoring results
  - Exercise - Enable monitoring for a generative AI application
  - Knowledge check
  - Summary
- Analyze and debug your generative AI app with tracing
  - Introduction
  - Why do you need to use tracing?
  - Identify what to trace in generative AI applications
  - Implement tracing in generative AI applications
  - Debug complex workflows with advanced tracing patterns
  - Make informed decisions with trace data analysis
  - Exercise - Enable tracing for a generative AI application
  - Knowledge check
  - Summary