

Controlling and Securing OpenAI Agents Execution in Python

via CodeSignal

Overview

Keep your agents secure, private, and reliable. This course covers securely injecting sensitive data into agent context, using hooks to monitor and customize agent workflows, and applying input and output guardrails to validate and filter everything an agent receives and produces.
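To give a flavor of the guardrail pattern the course builds toward, here is a minimal pure-Python sketch. The names (`GuardrailResult`, `banned_topic_guardrail`, `run_agent`) are illustrative assumptions, not the OpenAI Agents SDK's actual API or the course's code:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GuardrailResult:
    tripwire_triggered: bool  # True means the input is blocked
    info: str                 # why the guardrail fired (or passed)

def banned_topic_guardrail(user_input: str) -> GuardrailResult:
    """Block inputs that mention topics the agent must not handle (illustrative)."""
    banned = ("password", "credit card")
    hit = next((term for term in banned if term in user_input.lower()), None)
    if hit:
        return GuardrailResult(True, f"blocked: mentions '{hit}'")
    return GuardrailResult(False, "ok")

def run_agent(user_input: str,
              guardrails: list[Callable[[str], GuardrailResult]]) -> str:
    # Validate the input before any model call; reject early if a guardrail trips.
    for guard in guardrails:
        result = guard(user_input)
        if result.tripwire_triggered:
            return f"Request rejected ({result.info})"
    # Placeholder for the real model call.
    return f"Agent response to: {user_input}"

print(run_agent("What's my credit card limit?", [banned_topic_guardrail]))
# prints: Request rejected (blocked: mentions 'credit card')
```

The key idea, which the course applies with the SDK's real guardrail types, is that validation runs as a separate, composable layer in front of the agent rather than inside its prompt.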

Syllabus

  • Unit 1: Securely Injecting Sensitive Data into Agents
    • Securing Sensitive Data with Context Wrappers
    • Injecting Context into Agent Runtime
    • Examining the Secure Conversation Flow
    • Context Sharing Across Agent Handoffs
    • Multiple Tools Sharing Secure Context
  • Unit 2: Tapping into Agent Workflows with RunHooks & AgentHooks
    • Building Your First Agent Monitor
    • Comprehensive Agent Workflow Monitoring System
• Agent-Specific Monitoring with AgentHooks
    • Dynamic Context Injection with AgentHooks
    • Refactoring Hooks for Better Control
  • Unit 3: Protecting Agents with Input Guardrails
    • Working With Guardrail Outputs
    • Integrating Input Guardrails into Agents
    • Building an LLM Content Analyzer
    • Upgrading to Intelligent Content Validation
    • Layered Defense with Multiple Guardrails
  • Unit 4: Securing Agent Responses with Output Guardrails
    • Converting Input Guards to Output Guards
    • Converting LLM-Based Input Guards to Output Guards
    • Redacting Outputs for Information Leak Prevention
    • Securing Both Ends With Input and Output Guardrails
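The redaction step described in Unit 4 can be sketched in plain Python as follows. The patterns and the `redact` helper are illustrative assumptions, not the course's code, and a real output guardrail would cover many more sensitive-data types:

```python
import re

# Illustrative patterns for two kinds of sensitive data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(agent_output: str) -> str:
    """Replace sensitive substrings in an agent's response before it reaches the user."""
    for label, pattern in PATTERNS.items():
        agent_output = pattern.sub(f"[REDACTED {label.upper()}]", agent_output)
    return agent_output

print(redact("Contact jane@example.com, SSN 123-45-6789."))
# prints: Contact [REDACTED EMAIL], SSN [REDACTED SSN].
```

Running such a filter as an output guardrail, after the model produces its answer, is what lets the final unit secure both ends of the conversation.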
