Overview
This podcast episode features Phillip Carter discussing the practical complexities of implementing Large Language Models (LLMs) in product development. Drawing on his work in developer tooling, OpenTelemetry, and prompt engineering at Honeycomb, Carter covers security challenges and collaborative defenses against attacks, the roles of ML engineers and product managers in successful LLM implementation, and how to identify leading indicators and measure ROI for AI initiatives. Topics include querying in natural language, function calls, error pattern analysis, prompt injection cycles, and the often undervalued role of the user interface in AI features. The episode also examines cost considerations and ROI of AI implementations, the balance between ML and product perspectives in model trade-offs, and how observability and iterative processes improve LLMs in production.
Syllabus
Phillip's preferred coffee
Takeaways
Please like, share, and subscribe to our MLOps channels!
Phillip's background
Querying in natural language
Function calls
Pasting errors or traces
Error patterns
Honeycomb's improvement cycle
Prompt boxes rationale
Prompt injection cycles
Injection attempt
UI undervalued; charging for the AI feature
ROI and cost
Bridging ML and product perspectives
AI model trade-offs
Query Assistant
Honeycomb is hiring!
Wrap up
Taught by
MLOps.community