Step over to the dark side and learn about the vulnerabilities, exploits, and unintended consequences that AI models like LLMs suffer from, with hands-on prompting and exercises.
- What jailbreaking models involves and how to do it yourself
- Understanding vulnerabilities inherent to models, including prompt and data leakage
- The risks of exposing LLMs to proprietary or sensitive data
- Exploring the toxicity and bias inherently built into different models
- Real-world tests using ChatGPT, DeepSeek and other models
- Experiment with steering an LLM's neurons to prevent hallucinations