Overview
Learn the fundamentals of language modeling in this comprehensive lecture, covering key concepts from padding techniques to advanced neural language models. Begin with a review of assignments before diving into padding and the limitations of static embeddings, random initialization, and bag-of-words approaches. Trace the evolution of transformer models through to RLHF (Reinforcement Learning from Human Feedback), followed by an in-depth examination of n-gram language models. Conclude with an exploration of neural language models and their applications in modern natural language processing.
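To give a flavor of two of the topics above, here is a minimal sketch (not the course's own code) of a bigram language model: sentences are padded with hypothetical `<s>`/`</s>` boundary tokens, and conditional probabilities are estimated by maximum likelihood from counts. The toy corpus and function names are illustrative assumptions.

```python
# Minimal bigram language model sketch (illustrative, not from the lecture).
from collections import Counter

corpus = [
    "the cat sat",
    "the dog sat",
    "the cat ran",
]

def pad(sentence):
    # Pad with boundary tokens so bigrams like (<s>, the) are counted.
    return ["<s>"] + sentence.split() + ["</s>"]

unigrams = Counter()
bigrams = Counter()
for sent in corpus:
    tokens = pad(sent)
    unigrams.update(tokens[:-1])               # history (context) counts
    bigrams.update(zip(tokens, tokens[1:]))    # adjacent-pair counts

def prob(word, history):
    # Maximum-likelihood estimate: P(word | history) = count(history, word) / count(history)
    return bigrams[(history, word)] / unigrams[history]

print(prob("cat", "the"))  # 2 of the 3 occurrences of "the" are followed by "cat"
```

Neural language models covered later in the lecture replace these sparse counts with learned embeddings, which avoids assigning zero probability to unseen bigrams.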
Syllabus
Recap / Assignments
Padding
Limitations of static embeddings & random init & BoW
Transformers to RLHF timeline
N-gram language models
Neural language models
Taught by
UofU Data Science