Overview
This lecture by Kilian Weinberger of Cornell University explores approaches to applying Denoising Diffusion Models (DDMs) to text generation. Learn how diffusion models, which have transformed image synthesis through their sample quality and controllability, can be adapted to the discrete domain of language. The lecture covers two key approaches: Latent Diffusion for Language Generation, which runs DDMs in the continuous latent space of a text autoencoder to produce fluent text, and a hybrid method in which a diffusion model generates semantic proposals that guide an autoregressive text decoder. Understand how these techniques combine the fluency of autoregression with the plug-and-play control of diffusion, potentially opening new possibilities for more flexible and controllable language generation beyond the limitations of current transformer-based language models.
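To make the latent-diffusion pipeline described above concrete, here is a minimal sketch of its shape: encode text into a continuous latent space, sample a latent via reverse diffusion, then decode back to tokens. Every component here (`encode`, `denoise_step`, `decode`) is a toy stand-in, not the actual models from the lecture, where the encoder/decoder would be a pretrained text autoencoder and the denoiser a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(token_ids, dim=8):
    # Stand-in for a text autoencoder's encoder: tokens -> latent vector.
    # A real system would use a learned encoder (hypothetical here).
    local = np.random.default_rng(abs(hash(tuple(token_ids))) % (2**32))
    return local.standard_normal(dim)

def denoise_step(z_t, t):
    # Stand-in for one learned reverse-diffusion update: gradually
    # shrink the noisy latent toward the data manifold (here, the mean).
    return z_t * (1.0 - 1.0 / (t + 1))

def sample_latent(dim=8, T=50):
    # Reverse diffusion: start from pure Gaussian noise in latent space
    # and apply T denoising steps.
    z = rng.standard_normal(dim)
    for t in range(T, 0, -1):
        z = denoise_step(z, t)
    return z

def decode(z):
    # Stand-in for the autoregressive decoder: latent -> token sequence.
    # In the hybrid method, the latent conditions the decoder instead.
    return ["<tok_%d>" % (i % 3) for i in range(len(z))]

latent = sample_latent()
tokens = decode(latent)
```

The design point the sketch illustrates is that diffusion happens entirely in a continuous latent space, sidestepping the discreteness of text; fluency comes from the decoder, controllability from steering the latent sampling.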
Syllabus
Advancing Diffusion Models for Text Generation
Taught by
Simons Institute