Overview
The demand for technical generative AI (GenAI) skills is increasing, and businesses are actively seeking AI engineers who can work with large language models (LLMs). This IBM course is designed to build job-ready skills that can accelerate your AI career.
In this course, you’ll explore transformers and key model frameworks and platforms, including Hugging Face and PyTorch. You’ll begin with a foundational framework for optimizing LLMs and quickly advance to fine-tuning generative AI models. You’ll also learn advanced techniques such as parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized LoRA (QLoRA), and prompting.
The hands-on labs will give you valuable, practical experience, including loading, pretraining, and fine-tuning models using industry-standard tools. These skills apply directly in real-world AI roles and make strong talking points in interviews.
If you’re ready to take your AI career to the next level and strengthen your resume with in-demand GenAI competencies, enroll today and start applying your new skills in just one week!
Syllabus
- Transformers and Fine-Tuning
- In this module, you will delve into the practical aspects of working with large language models (LLMs) using industry-standard tools like Hugging Face and PyTorch. You’ll explore the distinctions between these frameworks, learn how to load and perform inference with pretrained models, and understand the processes of pretraining and fine-tuning LLMs. Through hands-on labs, you’ll gain experience in implementing these techniques, enhancing your ability to develop and optimize generative AI models for various applications. By the end of this module, you’ll be equipped with the skills to effectively utilize and fine-tune LLMs, aligning them with specific tasks and performance requirements.
- Parameter-Efficient Fine-Tuning (PEFT)
- In this module, you will explore cutting-edge methods for fine-tuning large language models using parameter-efficient fine-tuning (PEFT) techniques. You’ll gain an understanding of adapters, low-rank adaptation (LoRA), and quantization, along with practical applications of PyTorch and Hugging Face libraries. The hands-on labs and readings will deepen your knowledge of soft prompts, quantized LoRA (QLoRA), and key terminology. You will also have access to a concise cheat sheet and a glossary that reinforce essential techniques, terms, and tools introduced throughout the course.
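The low-rank adaptation (LoRA) technique covered in the PEFT module can be previewed with a small, self-contained sketch. In real workflows you would train adapters with PyTorch and the Hugging Face PEFT library; the pure-Python example below (all matrices and numbers are illustrative, not from the course) only shows the core idea: the pretrained weight matrix `W` stays frozen, and just a low-rank update `B @ A`, scaled by `alpha / r`, is learned.

```python
# Illustrative LoRA sketch (pure Python; real training uses PyTorch / Hugging Face PEFT).
# LoRA freezes the pretrained d x k weight W and learns a low-rank update:
#     W' = W + (alpha / r) * (B @ A),  with A of shape r x k and B of shape d x r.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    inner, cols = len(Y), len(Y[0])
    return [[sum(row[t] * Y[t][j] for t in range(inner)) for j in range(cols)]
            for row in X]

def lora_update(W, A, B, alpha):
    """Return the adapted weight W + (alpha / r) * (B @ A); W itself is untouched."""
    r = len(A)            # rank of the adaptation
    BA = matmul(B, A)     # d x k low-rank update built from the two small factors
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Tiny example: a 2x2 frozen weight with a rank-1 adapter (values are made up).
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]          # r = 1, k = 2
B = [[0.5], [1.0]]        # d = 2, r = 1
W_adapted = lora_update(W, A, B, alpha=1.0)
```

The efficiency win is that only `A` and `B` are trained: for a d x k weight and rank r, that is r * (d + k) parameters instead of d * k, which is why LoRA (and its quantized variant QLoRA) makes fine-tuning large models tractable.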
Taught by
Joseph Santarcangelo, Ashutosh Sagar, and Fateme Akbari