Overview
Explore cutting-edge research in natural language processing through this colloquium featuring five expert speakers:

- Sewon Min proposes using data at inference time to keep language models up-to-date.
- Margaret Li presents efficient LLM training techniques that leverage adaptive computation and sparsity.
- Orevaoghene Ahia analyzes tokenization methods and their impact on model utility and costs.
- Shangbin Feng explores how political bias propagates in language models.
- Niloofar Mireshghallah examines privacy risks in interactive LLM settings.

Together, these talks offer insight into current advancements and open challenges across several critical areas of NLP research.
Syllabus
Allen School Colloquium: NLP Research Lab
Taught by
Paul G. Allen School