
Amazon Web Services

Customizing and Evaluating LLMs Using Amazon SageMaker JumpStart

Amazon Web Services and Amazon via AWS Skill Builder

Overview


In this course, you will learn about customizing and evaluating large language models (LLMs) using Amazon SageMaker JumpStart. Amazon SageMaker JumpStart is a machine learning (ML) hub with foundation models, built-in algorithms, and prebuilt ML solutions that you can deploy with a few clicks. You will learn about alternatives to fine-tuning, including the foundations of prompt engineering and retrieval augmented generation (RAG). You will also learn to fine-tune, deploy, and evaluate models available on SageMaker JumpStart.


Using your own AWS account and the notebooks provided, you can practice building RAG applications using the Amazon SageMaker-LangChain integration. You can also fine-tune a Llama 3 model and evaluate it using evaluation metrics. You can practice an aspect of responsible AI with the help of a notebook that addresses prompt stereotyping. Alternatively, you can watch a video demonstration of running the notebooks.
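To make the RAG pattern concrete before you open the notebooks, here is a minimal, library-free sketch of the idea: retrieve the documents most relevant to a question, then assemble them into an augmented prompt for the model. In the course, retrieval is backed by real embeddings and the answer comes from a Falcon endpoint deployed via SageMaker JumpStart; the keyword-overlap retriever and the function names below are simplified stand-ins, not the course's actual code.

```python
def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question.
    (A real RAG app would rank by embedding similarity instead.)"""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {question}"

docs = [
    "SageMaker JumpStart is an ML hub with foundation models.",
    "Falcon 7B is an open-weight large language model.",
    "RAG injects retrieved documents into the prompt.",
]
question = "What is SageMaker JumpStart?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)
```

The final step, sending `prompt` to a deployed endpoint, is what the SageMaker-LangChain integration handles in the demo.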

  • Course level: Advanced
  • Duration: 4 hours

Activities

This course includes presentations, demonstrations, and assessments.


Course objectives

In this course, you will do the following:

  • Describe the different techniques to customize LLMs.
  • Describe when to use prompt engineering and retrieval augmented generation as customization options.
  • Demonstrate the use of Amazon SageMaker-LangChain integration to build a RAG application using a Falcon model.
  • Describe the use of domain adaptation and instruction fine-tuning.
  • Demonstrate how to fine-tune and deploy a model from the SageMaker JumpStart ML hub.
  • Demonstrate the use of the SageMaker Python SDK to fine-tune LLMs using Parameter Efficient Fine-Tuning (PEFT).
  • Evaluate foundation models by using the SageMaker JumpStart console and fmeval library.
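The PEFT objective above is easier to appreciate with a quick parameter count. In LoRA-style PEFT, instead of updating a full d_out × d_in weight matrix, you train two small low-rank factors B (d_out × r) and A (r × d_in). The sketch below only does the arithmetic with illustrative dimensions; the course uses the SageMaker Python SDK to run PEFT against a real model.

```python
def full_params(d_out: int, d_in: int) -> int:
    """Trainable weights when fine-tuning the full matrix."""
    return d_out * d_in

def lora_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable weights with rank-r low-rank adapters B and A."""
    return d_out * r + r * d_in

# Illustrative size for one transformer projection layer.
d_out, d_in, r = 4096, 4096, 8
full = full_params(d_out, d_in)   # 16,777,216 weights
lora = lora_params(d_out, d_in, r)  # 65,536 weights
print(f"trainable fraction: {lora / full:.4f}")  # → trainable fraction: 0.0039
```

Training well under 1% of the weights per adapted layer is what makes PEFT feasible on modest GPU budgets.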

Intended audience

This course is intended for the following job roles:

  • Data scientists
  • Machine learning engineers

Prerequisites

We recommend that attendees of this course have the following:

  • More than 1 year of experience with natural language processing (NLP)
  • More than 1 year of experience with training and tuning language models
  • Intermediate-level proficiency in Python programming
  • AWS Technical Essentials
  • Amazon SageMaker JumpStart Foundations

Course outline

  • Module 1: Introduction to Customizing LLMs
    • Customizing LLMs
    • Choosing customization methods
  • Module 2: Prompt Engineering and RAG for Customizing LLMs
    • Using prompt engineering
    • Using Retrieval Augmented Generation (RAG)
    • Using advanced RAG patterns
  • Demo 1: Create a RAG application using Amazon SageMaker-LangChain integration and a Falcon 7B model from SageMaker JumpStart
  • Module 3: Fine-tuning and Deploying Foundation Models
    • Customizing foundation models using fine-tuning
    • Using the SageMaker JumpStart console to fine-tune and deploy an LLM
  • Demo 2: Fine-tune a Llama 3 model available on SageMaker JumpStart using the Amazon SageMaker Python SDK
  • Module 4: Evaluating Foundation Models
    • Model evaluation metrics
    • Evaluating foundation models using the Amazon SageMaker JumpStart console
  • Demo 3: Evaluate prompt stereotyping of a Falcon-7B model using the fmeval library
  • Module 5: Resources
    • Learn More
    • Contact Us
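As a preview of Demo 3, the prompt-stereotyping metric works on pairs of sentences: a stereotyped ("more" biased) variant and an anti-stereotyped ("less" biased) variant. For each pair, you check whether the model assigns higher probability to the stereotyped sentence; the score is the fraction of pairs where it does, so a value near 0.5 suggests no systematic preference. The fmeval library computes this against a real endpoint; the log-probabilities below are made-up stand-ins for illustration only.

```python
def stereotyping_score(pairs: list[tuple[float, float]]) -> float:
    """pairs: (log p(stereotyped), log p(anti-stereotyped)) per example.
    Returns the fraction of pairs where the model prefers the
    stereotyped sentence."""
    biased = sum(1 for lp_more, lp_less in pairs if lp_more > lp_less)
    return biased / len(pairs)

# Mock log-probabilities; a real run would query the model for these.
mock_logprobs = [(-12.1, -13.4), (-9.8, -9.2), (-15.0, -16.3), (-11.2, -11.9)]
print(stereotyping_score(mock_logprobs))  # → 0.75
```

Here the hypothetical model prefers the stereotyped sentence in 3 of 4 pairs, a signal that would warrant closer inspection in a responsible AI review.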

