
Quick Start Guide to Large Language Models (LLMs): Unit 3

via Coursera

Overview

This course explores building novel architectures tailored to specific challenges. You'll gain hands-on experience building custom multimodal models that integrate visual and textual data, and learn to implement reinforcement learning for dynamic response refinement. Through practical case studies, you'll apply advanced fine-tuning techniques such as mixed precision training and gradient accumulation to optimize open-source models like BERT and GPT-2. Moving from theory to practice, the course also covers the complexities of deploying LLMs to the cloud, using techniques like quantization and knowledge distillation to produce efficient, cost-effective models. By the end of this course, you'll be equipped to evaluate LLM tasks and deploy high-performing models.
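The overview mentions quantization as a way to make deployed models cost-effective. As a hedged illustration (not code from the course), the sketch below shows the core idea behind symmetric int8 weight quantization: store each weight as a 1-byte integer plus a shared scale, instead of a 4-byte float. Real LLM deployments use framework tooling rather than hand-rolled code like this.

```python
# Minimal sketch of post-training weight quantization (illustrative only).
# Each float weight is mapped to an int8 value via a per-tensor scale;
# memory drops 4x at the cost of a bounded rounding error.

def quantize(weights):
    """Symmetric int8 quantization: w_q = round(w / scale), scale from the max."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights: w ≈ w_q * scale."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.008, 0.95]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Rounding error per weight is bounded by half the quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

Knowledge distillation, also named in the overview, is complementary: rather than compressing weights numerically, it trains a smaller student model to match a larger teacher's outputs.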

Syllabus

  • Advanced LLM Usage
    • Multimodality: move beyond basic models to create new architectures tailored to specific challenges, integrating different types of data so models can interpret both text and visuals. A hands-on case study develops a system that answers questions about images using transformer-based encoders and decoders with cross-attention mechanisms.
    • Reinforcement learning for LLM alignment: set up training loops in which models learn and refine responses from live and modeled feedback, adjusting outputs in real time, demonstrated with the open-source Flan-T5 model.
    • Open-source LLM fine-tuning: use techniques like mixed precision training and gradient accumulation to optimize training loops for efficiency and precision, with real-world case studies ranging from multi-label classification to instruction alignment.
    • Deployment and evaluation: address the challenges of moving LLMs to the cloud, optimizing with quantization, pruning, and knowledge distillation to deploy cost-effective models without sacrificing performance. You'll evaluate LLM tasks by breaking them into four main categories with guidelines for each, and explore how LLMs structure knowledge within their parameters using simple probing mechanisms. By the end of this module, you'll have the tools to evaluate LLMs and their ability to solve specific tasks on certain datasets.
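The syllabus names gradient accumulation as a fine-tuning optimization. As a hedged sketch (not course code), the pattern below accumulates gradients over several micro-batches of a toy hand-differentiated model and applies one averaged update per effective batch; in a framework like PyTorch the same structure calls `loss.backward()` per micro-batch and `optimizer.step()` once every `accum_steps` batches. All names here are illustrative.

```python
# Gradient accumulation on a toy linear model y = w*x (illustrative sketch).
# Gradients from several micro-batches are summed before a single parameter
# update, simulating a large batch that wouldn't fit in memory at once.
# (Mixed precision training, also covered in the course, would additionally
# run forward/backward passes in float16 with a loss scale; omitted here.)

def grad(w, x, y):
    """Gradient of the squared error (w*x - y)**2 with respect to w."""
    return 2 * (w * x - y) * x

def train(data, w=0.0, lr=0.05, accum_steps=4, epochs=50):
    for _ in range(epochs):
        accum = 0.0
        for i, (x, y) in enumerate(data, start=1):
            accum += grad(w, x, y)        # accumulate instead of stepping
            if i % accum_steps == 0:      # one update per effective batch
                w -= lr * (accum / accum_steps)
                accum = 0.0
    return w

# Data drawn from y = 3x; training should recover w close to 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
print(round(train(data), 2))  # prints 3.0
```

The key design point is that dividing the accumulated gradient by `accum_steps` keeps the update magnitude comparable to a single large batch, so the learning rate does not need retuning as the accumulation count changes.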

Taught by

Pearson and Sinan Ozdemir

