Introduction to Transformer Models for NLP: Unit 1

via Coursera

Overview

This course traces the development of natural language processing (NLP) from foundational concepts to modern transformer architectures. You will learn how attention mechanisms reshaped language modeling, then study the details of transformer models, including scaled dot-product attention and multi-headed attention. The course also includes hands-on transfer-learning exercises with pre-trained models such as BERT and GPT, showing how to fine-tune them for specific NLP tasks in PyTorch. By the end, you will understand the theory behind current NLP models and have practical experience applying them to real-world problems.
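The scaled dot-product attention mentioned above computes softmax(QKᵀ/√d_k)V. As a minimal illustrative sketch (in NumPy rather than PyTorch, with randomly generated matrices standing in for real query/key/value projections — not course material):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Scores are query-key dot products, scaled by sqrt(d_k)
    # to keep their variance roughly constant as d_k grows.
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights          # weighted sum of values

# Toy example: 4 tokens, dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the value vectors, weighted by how strongly that token's query matches each key.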

Syllabus

  • Introduction to Transformer Models for NLP: Unit 1
    • This module explores the evolution of natural language processing (NLP) through the development and application of attention mechanisms and transformer architectures. Beginning with the history and foundational concepts of attention in language models, it examines the transformative impact of transformers and their distinctive attention mechanisms. The module concludes with practical instruction on transfer learning, demonstrating how to fine-tune state-of-the-art pre-trained models such as BERT and GPT in PyTorch to achieve strong NLP results.
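Multi-headed attention, also covered in this module, runs several scaled dot-product attentions in parallel on learned subspaces and concatenates the results. A rough NumPy sketch of the mechanism, with random weight matrices standing in for learned parameters (names like `Wq`, `Wo` are illustrative, not from the course):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, num_heads=2):
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    def split(M):
        # (seq, d_model) -> (heads, seq, d_head)
        return M.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    Q, K, V = split(x @ Wq), split(x @ Wk), split(x @ Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)
    heads = softmax(scores) @ V                    # (heads, seq, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo                             # final output projection

# Toy example: 5 tokens, model dimension 8, 2 heads.
rng = np.random.default_rng(1)
d_model, seq_len = 8, 5
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))
out = multi_head_attention(x, Wq, Wk, Wv, Wo, num_heads=2)
print(out.shape)  # (5, 8)
```

Splitting the model dimension across heads lets each head attend to different relationships between tokens at the same cost as one full-width attention.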

Taught by

Pearson and Sinan Ozdemir

Reviews