Overview
Explore the challenges and advancements in Natural Language Processing (NLP) evaluation in this insightful talk by Matt Gardner from the Allen Institute for Artificial Intelligence. Delve into the limitations of current NLP benchmarks and discover innovative approaches to creating more meaningful and rigorous evaluation methods. Learn about the Open Reading Benchmark (ORB), which consolidates various reading comprehension datasets to target different aspects of reading comprehension. Examine the concept of contrast sets, a technique for building non-i.i.d. test sets that more thoroughly assess a model's capabilities. Gain valuable insights into the intersection of open-domain reading comprehension and question semantics understanding, and explore the importance of reasoning over open-domain text in NLP research.
Syllabus
NLP Evaluations that We Believe In -- Matt Gardner (Allen Institute for Artificial Intelligence)
Taught by
Center for Language & Speech Processing (CLSP), JHU