Overview
This talk by Danqi Chen of Princeton University explores the frontier between retrieval-augmented language models and long-context language models, examining how the two approaches complement and compete with each other in handling extensive information. It covers recent research developments in both paradigms and their implications for the future of language models and transformer architectures. The presentation is part of the Simons Institute's series on the future of language models and transformers.
Syllabus
The Frontier between Retrieval-augmented and Long-context Language Models
Taught by
Simons Institute