Overview
This course provides a practical introduction to using transformer-based models for natural language processing (NLP) applications. You will learn to build and train models for text classification using encoder-based architectures like Bidirectional Encoder Representations from Transformers (BERT), and explore core concepts such as positional encoding, word embeddings, and attention mechanisms.
The course covers multi-head attention, self-attention, and causal language modeling with GPT for tasks like text generation and translation. You will gain hands-on experience implementing transformer models in PyTorch, including pretraining strategies such as masked language modeling (MLM) and next sentence prediction (NSP).
Through guided labs, you’ll apply encoder and decoder models to real-world scenarios. This course is designed for learners interested in generative AI engineering and requires prior knowledge of Python, PyTorch, and machine learning. Enroll now to build your skills in NLP with transformers!
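As a taste of the kind of mechanism the course works through, scaled dot-product attention can be sketched in PyTorch roughly as follows. This is an illustrative sketch using common conventions, not code taken from the course labs:

```python
import math
import torch.nn.functional as F

def scaled_dot_product_attention(query, key, value, mask=None):
    """Minimal sketch of scaled dot-product attention (illustrative, not course lab code)."""
    d_k = query.size(-1)
    # Similarity between each query and every key, scaled by sqrt(d_k)
    scores = query @ key.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # Padding or causal mask: blocked positions get -inf so softmax assigns them ~0 weight
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    # Each output position is a weighted average of the value vectors
    return weights @ value, weights
```

In the multi-head attention covered in the course, this same operation runs in parallel over several projected subspaces and the results are concatenated.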
Syllabus
- Fundamental Concepts of Transformer Architecture
- In this module, you will learn how transformers process sequential data using positional encoding and attention mechanisms. You will explore how to implement positional encoding in PyTorch and understand how attention helps models focus on relevant parts of input sequences. You'll dive deeper into self-attention and scaled dot-product attention with multiple heads to see how they contribute to language modeling tasks. The module also explains how the transformer architecture leverages these mechanisms efficiently. Through hands-on labs, you’ll implement these concepts and build transformer encoder layers in PyTorch. Finally, you'll apply transformer models to text classification, including building a data pipeline, defining the model, and training it, while also exploring techniques to optimize transformer training performance. (A minimal positional-encoding sketch follows this syllabus.)
- Advanced Concepts of Transformer Architecture
- In this module, you will learn how decoder-based models like GPT are trained using causal language modeling and implemented in PyTorch for both training and inference. You will explore encoder-based models, such as Bidirectional Encoder Representations from Transformers (BERT), and understand their pretraining strategies using masked language modeling (MLM) and next sentence prediction (NSP), along with data preparation techniques in PyTorch. You will also examine how transformer architectures are applied to machine translation, including their implementation in PyTorch. Through hands-on labs, you will gain practical experience with decoder models, encoder models, and translation tasks. The module concludes with a cheat sheet, glossary, and summary to help consolidate your understanding of key concepts. (A minimal MLM data-preparation sketch also follows this syllabus.)
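The positional encoding introduced in the first module is commonly implemented with fixed sinusoids, as in the original Transformer paper. A minimal PyTorch sketch (illustrative only, not the course's lab code) might look like this:

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Sinusoidal positional encoding, as in the original Transformer (illustrative sketch)."""

    def __init__(self, d_model, max_len=5000):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)              # (max_len, 1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)               # even dimensions use sine
        pe[:, 1::2] = torch.cos(position * div_term)               # odd dimensions use cosine
        self.register_buffer("pe", pe)                             # saved with the model, not trained

    def forward(self, x):
        # x: (batch, seq_len, d_model); add the encoding for the first seq_len positions
        return x + self.pe[: x.size(1)]
```

The encoding is added to the token embeddings so the model can distinguish positions without recurrence.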
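Likewise, the MLM pretraining covered in the second module relies on a data-preparation step that randomly masks tokens. The helper below is a hypothetical sketch following the common BERT-style 80/10/10 convention, not the course's own implementation:

```python
import torch

def mask_tokens_for_mlm(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Hypothetical MLM data-preparation helper (BERT-style 80/10/10 masking), for illustration."""
    inputs = input_ids.clone()
    labels = input_ids.clone()

    # Pick ~15% of positions as prediction targets
    masked = torch.bernoulli(torch.full(labels.shape, mlm_prob)).bool()
    labels[~masked] = -100  # default ignore_index for nn.CrossEntropyLoss

    # 80% of the selected positions are replaced with the [MASK] token
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    inputs[replaced] = mask_token_id

    # 10% are replaced with a random token; the remaining 10% keep the original token
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replaced
    inputs[randomized] = torch.randint(vocab_size, labels.shape)[randomized]

    return inputs, labels
```

The labels tensor keeps the original token ids only at masked positions, so the loss is computed just on the tokens the model must reconstruct.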
Taught by
Joseph Santarcangelo, Fateme Akbari, and Kang Wang