

Data-distributional Approaches for Generalizable Language Models

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore data-distributional approaches to building more generalizable language models in this lecture by Stanford PhD student Sang Michael Xie. Discover principled methods for improving and understanding language models by focusing on the pre-training data distribution. Learn about optimizing the mixture of data sources for efficient, multipurpose language model training, using importance resampling to select relevant data from large-scale web datasets when training specialized models, and analyzing in-context learning theoretically. Understand how these approaches can improve the capabilities and training efficiency of large language models, and how they relate to modeling the coherence structure of pre-training data.
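To make the importance-resampling idea mentioned above concrete, here is a minimal sketch, not the lecture's exact method: it scores each raw web example by a log importance weight (target likelihood minus raw likelihood) under cheap unigram proxy models, then samples without replacement via the Gumbel top-k trick. The function names, the unigram proxies, and the Gumbel step are all illustrative assumptions.

```python
from collections import Counter
import math
import random

def fit_unigram(texts, smoothing=1.0):
    """Fit a smoothed unigram distribution over whitespace tokens."""
    counts = Counter(tok for t in texts for tok in t.split())
    total = sum(counts.values()) + smoothing * len(counts)
    return {tok: math.log((c + smoothing) / total) for tok, c in counts.items()}

def unigram_log_prob(text, log_probs, unk_log_prob=math.log(1e-8)):
    """Log-likelihood of a text under a unigram model (a cheap proxy for p(x))."""
    return sum(log_probs.get(tok, unk_log_prob) for tok in text.split())

def select_by_importance_resampling(raw_texts, target_texts, k, seed=0):
    """Pick k raw examples whose unigram statistics resemble the target data.

    Each raw example x gets a log importance weight
        log w(x) = log p_target(x) - log p_raw(x)
    under the unigram proxies; adding Gumbel noise and keeping the top k
    draws a sample without replacement roughly proportional to w(x).
    """
    rng = random.Random(seed)
    log_p_target = fit_unigram(target_texts)
    log_p_raw = fit_unigram(raw_texts)
    scored = []
    for i, x in enumerate(raw_texts):
        log_w = unigram_log_prob(x, log_p_target) - unigram_log_prob(x, log_p_raw)
        gumbel = -math.log(-math.log(rng.random() + 1e-12) + 1e-12)
        scored.append((log_w + gumbel, i))
    return [i for _, i in sorted(scored, reverse=True)[:k]]

if __name__ == "__main__":
    raw = [
        "stock prices rose sharply in early trading",
        "the enzyme catalyzes the reaction at low temperature",
        "patients in the trial received the new treatment",
    ]
    target = [
        "the clinical trial enrolled one hundred patients",
        "treatment outcomes improved after the intervention",
    ]
    print(select_by_importance_resampling(raw, target, k=2))
```

In practice, pipelines like this often replace raw unigram counts with hashed n-gram features so the proxy models stay cheap at web scale, but the selection logic follows the same importance-weighting idea.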

Syllabus

Data-distributional Approaches for Generalizable Language Models -- Sang Michael Xie (Stanford)

Taught by

Center for Language & Speech Processing (CLSP), JHU

