Overview
Explore the challenges and advancements in Natural Language Processing (NLP) evaluation in this talk by Matt Gardner of the Allen Institute for Artificial Intelligence. The talk examines the limitations of current NLP benchmarks and presents approaches for building more meaningful and rigorous evaluations. Learn about the Open Reading Benchmark (ORB), which consolidates multiple reading comprehension datasets, each targeting a different aspect of reading comprehension. Examine contrast sets, a technique for constructing non-i.i.d. test sets that probe a model's capabilities more thoroughly than standard held-out data. The talk also covers the intersection of open-domain reading comprehension and question semantics, and the importance of reasoning over open-domain text in NLP research.
Syllabus
NLP Evaluations that We Believe In -- Matt Gardner (Allen Institute for Artificial Intelligence)
Taught by
Center for Language & Speech Processing (CLSP), JHU