How We Hacked YC Spring 2025 Batch's AI Agents - Security Vulnerabilities and Mitigation Strategies
AI Engineer via YouTube
Overview
Learn about critical security vulnerabilities in AI agents through a detailed analysis of penetration testing conducted on Y Combinator's Spring 2025 batch companies. Discover how security researchers compromised 7 of the 16 publicly accessible AI agents they tested, each within 30 minutes, exposing serious flaws that allowed data leaks, remote code execution, and database takeovers. Explore the evolution of agent technology stacks and understand why these security concerns have become increasingly prevalent as AI agents gain more capabilities and access to sensitive systems.

Examine three major vulnerability categories through real-world examples: Cross-User Data Access (IDOR) flaws that expose one user's information to another, Arbitrary Code Execution vulnerabilities that let attackers run malicious code on company servers, and Server-Side Request Forgery (SSRF) attacks that can compromise internal systems and databases.

Gain practical insight into the common implementation mistakes that leave AI agents vulnerable, the specific attack vectors used to exploit those weaknesses, and the essential mitigation strategies that protect AI systems before they put a business at risk. The talk closes with actionable security recommendations and best practices for building AI agent architectures that can withstand sophisticated attacks while maintaining functionality and user experience.
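The IDOR class described above comes down to an endpoint that looks records up by a client-supplied ID without checking ownership. A minimal Python sketch of the pattern and its fix (all names and data here are hypothetical illustrations, not taken from the talk):

```python
# Hypothetical in-memory store standing in for an agent's database.
RECORDS = {
    "doc-1": {"owner": "alice", "body": "alice's notes"},
    "doc-2": {"owner": "bob", "body": "bob's notes"},
}

def fetch_record_vulnerable(record_id: str, current_user: str) -> dict:
    # VULNERABLE (IDOR): trusts the client-supplied ID, so any
    # authenticated user can read another user's record by guessing IDs.
    return RECORDS[record_id]

def fetch_record_safe(record_id: str, current_user: str) -> dict:
    # FIXED: authorize after lookup -- the record must belong to the caller.
    record = RECORDS[record_id]
    if record["owner"] != current_user:
        raise PermissionError("record does not belong to requesting user")
    return record
```

The same ownership check applies to any tool an agent can call on a user's behalf: the authorization decision must use the server-side session identity, never an ID the model or client passes in.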
Syllabus
00:00 Introduction to Casco and AI Agents
01:31 Evolution of Agent Stacks and Security Concerns
02:56 Why Casco Hacked AI Agents
04:00 Common Issue 1: Cross-User Data Access (IDOR)
07:38 Common Issue 2: Arbitrary Code Execution
12:38 Common Issue 3: Server-Side Request Forgery (SSRF)
14:48 Key Takeaways
15:28 Casco's Solution and Contact Information
15:56 Q&A
Taught by
AI Engineer