Overview
Explore advanced generative AI workflows in this 45-minute conference talk, which examines the strategic trade-offs between designing multi-step agentic AI solutions and fine-tuning large language models for complex tasks. Learn to orchestrate multi-agent AI workflows for real-world applications while mastering effective LLM fine-tuning techniques using Supervised Fine-Tuning (SFT) and deployment with vLLM. Discover how to evaluate the critical factors that influence LLM output quality, including data curation strategies and model architecture considerations. Watch practical demonstrations of SFT applications and complete LLM deployment pipelines, then learn the next steps for integrating with the Model Context Protocol (MCP) to extend your AI workflow capabilities.
Syllabus
Agentic Workflows vs. LLM Fine-Tuning | Vanessa Lopes
Taught by
MLCon | Machine Learning Conference