Overview
Learn how to prompt Large Language Models (LLMs) to generate secure code by default through structured evaluation and prompt hardening techniques. You will:

- Explore the results of a study that used PromptFoo to evaluate secure code generation, examining how LLMs often produce code with vulnerabilities included.
- Discover practical strategies for automating security into AI-generated code without writing lengthy prompts each time.
- Gain insight into the significance of priming in prompt engineering and learn how to implement security-first approaches in your AI-assisted development workflow.
- Master a minimal, repeatable prompt pattern that enhances code security, along with an evaluation recipe you can implement immediately.
- Develop strategies for integrating security by default into your own AI-generated code projects.
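As a rough illustration of the "minimal, repeatable prompt pattern" idea, one common approach is to keep a short security preamble and prepend it to every code-generation request instead of rewriting security guidance each time. The preamble wording and the `harden` helper below are illustrative assumptions for this sketch, not the speaker's actual pattern:

```python
# Sketch of prompt hardening via a reusable security preamble.
# The specific wording and helper name are assumptions, not the
# pattern presented in the talk.

SECURITY_PREAMBLE = (
    "You are generating production code. By default: validate all inputs, "
    "use parameterized queries, avoid hard-coded secrets, and apply the "
    "principle of least privilege. Flag any security trade-off explicitly."
)

def harden(task: str) -> str:
    """Prepend the reusable security preamble to a code-generation task."""
    return f"{SECURITY_PREAMBLE}\n\nTask: {task}"

if __name__ == "__main__":
    # The hardened prompt is what you would send to the model.
    print(harden("Write a function that looks up a user by email in SQL."))
```

Because the preamble is a single constant, it can be versioned and reused across a team, which is the "security by default, without lengthy prompts each time" property the course description highlights.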
Syllabus
Prompt Hardening - Secure Code Generation Using AI - Sean Sinclair - NDC Manchester 2025
Taught by
NDC Conferences