

Language Models - Prompting, Chain-of-Thought, and Instruction-Tuning - Lecture 21

MIT OpenCourseWare via YouTube

Overview

Explore advanced language model techniques in this lecture from MIT's Deep Learning course, delivered by guest instructor Jacob Andreas. The lecture covers prompting strategies, chain-of-thought reasoning, and instruction tuning, three methods at the core of modern language model applications. It examines in-context learning with large language models, showing how these systems can adapt to new tasks without any parameter updates, and how structured prompts enable models to work through complex problems. Chain-of-thought prompting breaks multi-step problems into intermediate reasoning steps, improving performance on logical and mathematical tasks, while instruction tuning aligns language models with human preferences and specific task requirements. Throughout, the lecture connects the theoretical foundations of these techniques to their practical applications in natural language processing.
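To make the distinction concrete, here is a minimal sketch of how a direct prompt differs from a few-shot chain-of-thought prompt of the kind discussed in the lecture. The example questions and the worked reasoning text are illustrative, not taken from the course; the sketch only builds the prompt strings and does not call any model API.

```python
# Hypothetical sketch: contrasting a direct prompt with a few-shot
# chain-of-thought (CoT) prompt. The questions and reasoning text
# below are made-up illustrations, not material from the lecture.

# A direct prompt asks the question and expects an immediate answer.
direct_prompt = (
    "Q: A cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there now?\n"
    "A:"
)

# A chain-of-thought prompt prepends a worked example whose answer
# spells out intermediate reasoning steps, encouraging the model to
# produce similar step-by-step reasoning for the new question.
cot_example = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

cot_prompt = cot_example + direct_prompt

print(cot_prompt)
```

Note that no parameters are updated anywhere: the only difference between the two conditions is the text placed in the context window, which is exactly what makes this in-context learning.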

Syllabus

Lec 21. Language Models

Taught by

MIT OpenCourseWare

