Overview
Learn to build a custom AI Security Benchmarking tool that evaluates the security of code generated by various large language models including Gemini, Mistral, and GLM 4.5. Set up an automated pipeline using Windsurf, OpenRouter, and Snyk that prompts multiple LLMs to write applications and immediately scans the output for security vulnerabilities. Configure your development environment with proper API keys, design effective security prompts, and create an automated build system using AI agents. Explore a comprehensive benchmarking dashboard to compare different models' security performance, analyze detailed Snyk security reports, and understand the implications for trusting AI-generated code in production environments.
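The pipeline described above (prompt each model through OpenRouter, then scan its output with Snyk) can be sketched as follows. This is a minimal illustration, not the course's actual code: the model slugs in `MODELS` and the helper names are assumptions, and the OpenRouter endpoint shown is its standard chat-completions URL.

```python
# Illustrative sketch of the prompt-and-scan pipeline.
# Model slugs and function names are assumptions, not the course's code.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

# Assumed OpenRouter model identifiers for the models named in the overview.
MODELS = ["google/gemini-2.0-flash-001", "mistralai/mistral-small", "z-ai/glm-4.5"]

def build_completion_request(model: str, security_prompt: str) -> dict:
    """Build the JSON body for an OpenRouter chat-completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": security_prompt}],
    }

def snyk_scan_command(project_dir: str) -> list[str]:
    """Snyk Code static-analysis scan of the generated app, JSON output
    so the results can feed a benchmarking dashboard."""
    return ["snyk", "code", "test", project_dir, "--json"]
```

In practice each request body would be POSTed to `OPENROUTER_URL` with an `Authorization: Bearer <API key>` header, the returned code written to `project_dir`, and the Snyk command run with `subprocess.run`.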
Syllabus
The Big Question: Is AI code secure?
Identifying vulnerabilities
Setting up the stack: OpenRouter & Snyk API keys
Configuring your IDE: Windsurf & Cursor
Designing the master security prompt
Automating the build with AI agents
Exploring the benchmarking dashboard
Testing different LLMs: GLM 4.5 & Trinity
Analyzing the Snyk security report
Final Verdict: Can you trust AI-generated code?
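The dashboard and report-analysis steps in the syllabus reduce to tallying each model's scan findings and ranking the models. A minimal sketch of that scoring stage, assuming findings arrive as a list of dicts with a `severity` key (an illustrative shape, not Snyk's exact report schema):

```python
from collections import Counter

# Illustrative scoring sketch; the findings shape (dicts with a "severity"
# key) is an assumption, not the exact structure of a Snyk JSON report.
def score_model(findings: list[dict]) -> Counter:
    """Count vulnerabilities per severity level for one model's generated code."""
    return Counter(f.get("severity", "unknown") for f in findings)

def rank_models(results: dict[str, list[dict]]) -> list[tuple[str, int]]:
    """Rank models by total finding count, fewest first (most secure on top)."""
    return sorted(((m, len(f)) for m, f in results.items()), key=lambda x: x[1])
```

A dashboard would then render `rank_models(...)` as the leaderboard and the per-severity counters as the detail view for each model.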
Taught by
Snyk