Overview
Learn how to optimize Large Language Model (LLM) prompts, mathematical solutions, and code using TextGrad, a Python framework that implements backpropagation through text feedback. Explore five detailed examples of TextGrad's capabilities, from reducing hallucinations through logical reasoning to performing textual gradient descent on selected variables. See how TextGrad maps familiar deep-learning concepts (losses, gradients, and optimizers) onto LLM prompting while learning to optimize code, solve math equations, and derive effective prompts from training data. Gain hands-on experience with TextGrad's simple interface for building LLM-gradient pipelines, understand how it differs from DSPy, and pick up advanced techniques such as prompt finetuning and formatted LLM calls. Access to an LLM is required to follow along with the practical demonstrations.
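To give a feel for the loss/gradient/optimizer vocabulary the course uses, here is a minimal sketch of a TextGrad pipeline based on the project's publicly documented interface (github.com/zou-group/textgrad). The engine names, question, and evaluation wording are illustrative assumptions, and exact signatures may vary between versions.

```python
# Minimal TextGrad loop: treat an LLM answer as a trainable variable,
# score it with a textual "loss", and improve it via textual gradients.
import textgrad as tg

# The backward engine is the LLM that writes the natural-language
# feedback ("gradients") during backpropagation.
tg.set_backward_engine("gpt-4o", override=True)

model = tg.BlackboxLLM("gpt-4o")

question = tg.Variable(
    "If a train travels 60 km in 45 minutes, what is its speed in km/h?",
    role_description="question posed to the LLM",
    requires_grad=False,  # we optimize the answer, not the question
)

answer = model(question)
answer.set_role_description("concise and accurate answer to the question")

# TextLoss turns an evaluation instruction into a "loss": an LLM
# critique of the current value of `answer`.
loss_fn = tg.TextLoss(
    "Evaluate the answer for correctness and clarity. "
    "Be critical and give concise feedback."
)

# TGD (Textual Gradient Descent) rewrites its parameters using the
# feedback accumulated by .backward().
optimizer = tg.TGD(parameters=[answer])

loss = loss_fn(answer)
loss.backward()   # propagate textual feedback to `answer`
optimizer.step()  # rewrite `answer` to address the feedback

print(answer.value)
```

The analogy to PyTorch is deliberate: the loss/backward/step cycle is the same, but gradients are critiques in natural language rather than tensors.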
Syllabus
- Intro
- What are Textual Gradients?
- DSPy vs TextGrad
- Example 1 - LLM Hallucination
- TextGrad Prompting under the hood
- Example 2 - Selected Textual Gradient Descent
- Formatted LLM Calls
- Example 3 - Optimize Code
- Example 4 - Solving Math
- Example 5 - Prompt Optimization
- Prompt Finetuning
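The last two syllabus items, Example 5 and Prompt Finetuning, concern learning a prompt from training data rather than fixing a single answer. A hedged sketch of that pattern follows; the training pairs, engine names, and critique wording are assumptions, and details such as the `system_prompt` keyword may differ by TextGrad version.

```python
# Hypothetical prompt-optimization loop: the system prompt is the
# trainable variable, updated from feedback on a small training set.
import textgrad as tg

tg.set_backward_engine("gpt-4o", override=True)

system_prompt = tg.Variable(
    "You are a helpful assistant. Answer math questions step by step.",
    requires_grad=True,
    role_description="system prompt guiding the task LLM",
)
model = tg.BlackboxLLM("gpt-4o-mini", system_prompt=system_prompt)
optimizer = tg.TGD(parameters=[system_prompt])

# Tiny illustrative training set: (question, reference answer) pairs.
train_set = [
    ("What is 15% of 240?", "36"),
    ("A car travels 120 km in 2 hours. What is its average speed?", "60 km/h"),
]

for question_text, reference in train_set:
    question = tg.Variable(
        question_text, role_description="question", requires_grad=False
    )
    prediction = model(question)
    prediction.set_role_description("answer produced with the current prompt")

    # The "loss" asks an LLM to compare the prediction to the reference.
    loss_fn = tg.TextLoss(
        f"The reference answer is: {reference}. "
        "Critique the given answer for correctness and brevity."
    )
    loss = loss_fn(prediction)

    optimizer.zero_grad()
    loss.backward()   # feedback flows back to the system prompt
    optimizer.step()  # the prompt is rewritten to address the feedback

print(system_prompt.value)  # the finetuned prompt
```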
Taught by
Neural Breakdown with AVB