
Mitigating LLM Hallucination Risk Through Research-Backed Metrics

Databricks via YouTube

Overview

Explore a 42-minute conference talk on mitigating Large Language Model (LLM) hallucination risks using research-backed metrics. Delve into ChainPoll, a methodology for evaluating LLM output quality, particularly in Retrieval-Augmented Generation (RAG) and fine-tuning scenarios. Learn about metrics that correlate strongly with human feedback while remaining cost-effective and low-latency. Gain insights into evaluating input quality, covering both training data and RAG context, as well as output quality, with a focus on hallucinations. Discover an evaluation and experimentation framework for prompt engineering with RAG and fine-tuning on custom data. Watch a practical, demo-led guide to implementing guardrails and reducing hallucinations in LLM-powered applications. Presented by Vikram Chatterji, CEO and co-founder of Galileo Technologies Inc., this talk offers valuable knowledge for developers and researchers working with LLMs.
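For readers unfamiliar with ChainPoll, the published method polls a judge LLM several times with a chain-of-thought prompt and scores a response by the fraction of runs that flag a hallucination. The sketch below illustrates that idea only; the `judge` stub, the prompt wording, and the five-poll default are illustrative assumptions, not Galileo's implementation.

```python
import re

JUDGE_TEMPLATE = (
    "Does the RESPONSE below contain claims that are not supported by the "
    "CONTEXT? Think step by step, then answer on the final line with "
    "'VERDICT: yes' or 'VERDICT: no'.\n\n"
    "CONTEXT:\n{context}\n\nRESPONSE:\n{response}"
)

def judge(prompt: str) -> str:
    # Placeholder: swap in a real LLM call (e.g. an OpenAI or Databricks
    # model-serving client). Returns a canned verdict so the example runs
    # standalone.
    return "The response cites a figure absent from the context.\nVERDICT: yes"

def chainpoll_score(context: str, response: str, polls: int = 5) -> float:
    """Poll the judge LLM `polls` times; return the fraction of runs that
    flag a hallucination (0.0 = clean, 1.0 = every run flagged it)."""
    prompt = JUDGE_TEMPLATE.format(context=context, response=response)
    flagged = 0
    for _ in range(polls):
        answer = judge(prompt)
        match = re.search(r"VERDICT:\s*(yes|no)", answer, re.IGNORECASE)
        if match and match.group(1).lower() == "yes":
            flagged += 1
    return flagged / polls

if __name__ == "__main__":
    score = chainpoll_score(
        context="Galileo was founded in 2021.",
        response="Galileo was founded in 2015 and has 500 employees.",
    )
    print(f"ChainPoll hallucination score: {score:.2f}")
```

Because the final score is a fraction rather than a single yes/no, it can be thresholded or tracked over time, which is what makes this style of metric practical for the RAG and fine-tuning evaluation workflows the talk describes.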

Syllabus

Mitigating LLM Hallucination Risk Through Research-Backed Metrics

Taught by

Databricks

