35% Off Finance Skills That Get You Hired - Code CFI35
Start speaking a new language. It’s just 3 weeks away.
Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Explore Azure AI Foundry's Red Teaming Agent in this 20-minute conference talk that demonstrates how to proactively identify vulnerabilities in autonomous AI agents before they impact real-world scenarios. Learn about the Azure AI Evaluation SDK's cutting-edge Red Teaming Agent tool designed to rigorously challenge AI agents by simulating adversarial scenarios and stress-testing agentic decision-making processes. Discover practical techniques for systematically identifying weaknesses in AI systems, interpreting evaluation results, and integrating comprehensive safety checks into your development lifecycle. Understand how adversarial testing methodologies can expose hidden risks and unexpected behaviors while ensuring your AI applications remain robust, ethical, and safe. Gain insights into advanced AI evaluation methodologies that help build trust in AI solutions and maintain competitive advantage in the rapidly evolving landscape of responsible AI development. The session is presented by Nagkumar Arkalgud, Senior Software Engineer at Microsoft with 10 years of experience who designed and built the Azure AI Evaluation SDK, and Keiji Kanazawa, a Product Lead with over 20 years of technical expertise in building web-scale services and API platforms for Microsoft's machine learning and artificial intelligence platform.
Syllabus
AI Red Teaming Agent: Azure AI Foundry — Nagkumar Arkalgud & Keiji Kanazawa, Microsoft
Taught by
AI Engineer