
YouTube

Can You Trust AI Code - I Built a Scanner to Find Out

Snyk via YouTube

Overview

Learn to build a custom AI security benchmarking tool that evaluates the security of code generated by various large language models, including Gemini, Mistral, and GLM 4.5. Set up an automated pipeline using Windsurf, OpenRouter, and Snyk that prompts multiple LLMs to write applications and immediately scans the output for security vulnerabilities. Configure your development environment with the proper API keys, design effective security prompts, and create an automated build system using AI agents. Explore a comprehensive benchmarking dashboard to compare the security performance of different models, analyze detailed Snyk security reports, and understand the implications for trusting AI-generated code in production environments.
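The pipeline described above can be sketched in a few lines of Python. This is a hedged outline, not the video's actual scripts: the function names are hypothetical, and it assumes the OpenRouter chat-completions endpoint plus the `snyk code test --json` CLI command.

```python
# Sketch of the prompt-then-scan pipeline (hypothetical helper names;
# the course's own tooling may be structured differently).
import json
import re
import subprocess
from pathlib import Path

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build an OpenRouter chat-completions payload for one model."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def extract_code(reply: str) -> str:
    """Pull the first fenced code block out of an LLM reply, if any."""
    match = re.search(r"```[\w+-]*\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else reply

def scan_with_snyk(project_dir: Path) -> dict:
    """Run Snyk Code against the generated project and return its JSON report."""
    result = subprocess.run(
        ["snyk", "code", "test", str(project_dir), "--json"],
        capture_output=True, text=True,
    )
    return json.loads(result.stdout)
```

In this shape, each model's reply is written to its own project directory and scanned immediately, so every LLM gets an identical prompt and an identical security check.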

Syllabus

The Big Question: Is AI code secure?
Identifying vulnerabilities
Setting up the stack: OpenRouter & Snyk API keys
Configuring your IDE: Windsurf & Cursor
Designing the master security prompt
Automating the build with AI agents
Exploring the benchmarking dashboard
Testing different LLMs: GLM 4.5 & Trinity
Analyzing the Snyk security report
Final Verdict: Can you trust AI-generated code?
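The dashboard and report-analysis steps in the syllabus boil down to tallying each model's Snyk findings by severity so the models can be compared side by side. A minimal sketch, assuming each Snyk report yields a list of issues with a `severity` field (the function and field names here are illustrative, not taken from the video):

```python
# Aggregate one model's Snyk findings by severity (illustrative sketch).
from collections import Counter

def summarize_report(model: str, issues: list[dict]) -> dict:
    """Tally security findings by severity for one model's generated app."""
    counts = Counter(issue.get("severity", "unknown") for issue in issues)
    return {"model": model, **counts}

# Example: two high-severity and one low-severity finding for a model.
row = summarize_report(
    "z-ai/glm-4.5",
    [{"severity": "high"}, {"severity": "low"}, {"severity": "high"}],
)
```

Rows like this, one per model, are all a benchmarking dashboard needs to rank models by how much vulnerable code they produce.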

Taught by

Snyk

