Overview
This course provides practical instruction on transformer architectures such as BERT, GPT, and T5. You will learn about attention mechanisms, transfer learning, and model fine-tuning through coding exercises and case studies. By the end, you will be able to build and optimize NLP models for various applications.
Syllabus
- Course 1: Introduction to Transformer Models for NLP: Unit 1
- Course 2: Introduction to Transformer Models for NLP: Unit 2
- Course 3: Introduction to Transformer Models for NLP: Unit 3
Courses
- Unit 1: This course traces the development of natural language processing (NLP), starting with basic concepts and moving to modern transformer architectures. You will learn about attention mechanisms and their impact on language modeling, as well as the details of transformer models, including scaled dot product attention and multi-headed attention. The course includes practical exercises in transfer learning using pre-trained models such as BERT and GPT, with instruction on fine-tuning these models for specific NLP tasks in PyTorch. By the end, you will understand the theory behind current NLP models and gain practical experience in applying them to real-world problems.
- Unit 2: This course covers the fundamentals and advanced applications of BERT and GPT models. You will learn how BERT processes text, including tokenization and vectorization, and practice fine-tuning BERT for tasks such as sequence classification, token classification, and question answering. The course also explains how GPT generates text, adapts to different writing styles, and can be fine-tuned for tasks like translating English to code. Additional topics include semantic search using Siamese BERT and multi-task learning with GPT through prompt engineering. By the end of the course, you will have the practical skills and theoretical understanding needed to apply BERT and GPT to a range of natural language processing problems.
- Unit 3: This course covers transformer models and their applications in natural language processing and computer vision. Topics include the T5 model, fine-tuning for tasks such as abstractive summarization, and the Vision Transformer. Students will learn to build an image captioning system by combining vision and language models. The course also provides practical instruction on deploying models, including MLOps practices, sharing models on HuggingFace, and cloud deployment with FastAPI. By the end of the course, students will have the knowledge and skills to implement, fine-tune, and deploy transformer models for real-world tasks.
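To give a flavor of the core mechanism covered in Unit 1, here is a minimal NumPy sketch of scaled dot product attention. This is not course material, just an illustration of the standard formula softmax(QK^T / sqrt(d_k))V; the toy matrix shapes are arbitrary assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_q, n_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V, weights                        # context vectors, attention map

# Toy example: 2 query vectors attending over 3 key/value pairs (shapes are illustrative)
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)        # (2, 4): one context vector per query
print(w.sum(axis=-1))   # each row of attention weights sums to 1
```

Multi-headed attention, also covered in the course, simply runs several such heads in parallel on learned projections of the input and concatenates the results.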
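Unit 2's discussion of how GPT generates text comes down to an autoregressive loop: score every candidate next token given the context, append the chosen one, and repeat. The sketch below illustrates that loop with greedy decoding; the hand-made bigram score table is a hypothetical stand-in for a trained transformer, not anything from the course.

```python
# Stand-in "model": next-token scores conditioned on the previous token only.
bigram_scores = {
    "<s>": {"the": 0.9, "a": 0.1},
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def greedy_generate(start="<s>", max_len=10):
    """Autoregressive greedy decoding: repeatedly pick the argmax next token."""
    tokens = [start]
    while tokens[-1] != "</s>" and len(tokens) < max_len:
        scores = bigram_scores[tokens[-1]]          # score candidates given context
        tokens.append(max(scores, key=scores.get))  # greedy: take the top-scoring one
    return tokens[1:-1]                             # strip the boundary markers

print(greedy_generate())  # ['the', 'cat', 'sat']
```

A real GPT model conditions on the entire preceding sequence rather than one token, and sampling strategies (temperature, top-k) replace the argmax, but the generation loop has this same shape.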
Taught by
Pearson and Sinan Ozdemir