Overview
Explore a comprehensive presentation on PromptEval, a novel method for estimating large language model performance across multiple prompts. Delve into the research conducted by Felipe Polo from the University of Michigan and his co-authors, which introduces an efficient approach to evaluate LLMs under practical budget constraints. Learn how PromptEval borrows strength across prompts and examples to produce accurate performance estimates. Gain insights into the methodology, implications, and potential applications of this innovative evaluation technique in the field of AI and natural language processing.
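The idea of "borrowing strength" across prompts can be illustrated with a toy shrinkage estimator: each prompt's raw accuracy is pulled toward the pooled accuracy across all prompts, so prompts with few scored examples get stabler estimates. This is only a minimal stand-in sketch; the actual PromptEval method fits a richer statistical model, and the function name and `prior_strength` parameter here are hypothetical.

```python
def shrink_estimates(results, prior_strength=5.0):
    """Toy 'borrow strength' estimator (NOT the PromptEval model).

    results: dict mapping prompt id -> list of 0/1 correctness outcomes.
    Each prompt's accuracy is shrunk toward the pooled mean, weighted
    by how many examples were actually scored for that prompt.
    """
    all_outcomes = [o for outs in results.values() for o in outs]
    pooled = sum(all_outcomes) / len(all_outcomes)
    estimates = {}
    for prompt, outs in results.items():
        n = len(outs)
        # Weighted average of the prompt-level mean and the pooled mean;
        # sparse prompts lean more heavily on the pooled estimate.
        estimates[prompt] = (sum(outs) + prior_strength * pooled) / (n + prior_strength)
    return estimates

# Sparse evaluations: only a handful of examples scored per prompt,
# mimicking a constrained evaluation budget.
results = {
    "prompt_a": [1, 1, 0],
    "prompt_b": [0],           # one example; raw accuracy would be 0.0
    "prompt_c": [1, 0, 1, 1],
}
est = shrink_estimates(results)
```

With shrinkage, `prompt_b`'s estimate lands between its raw 0.0 accuracy and the pooled mean rather than at the extreme, which is the intuition behind pooling information across prompts under a tight budget.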
Syllabus
Efficient Multi-Prompt Evaluation Explained
Taught by
Unify