Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Introduction to Transformer Models for NLP: Unit 2

via Coursera

Overview

This course covers the fundamentals and advanced applications of BERT and GPT models. You will learn how BERT processes text, including tokenization and vectorization, and practice fine-tuning BERT for tasks such as sequence classification, token classification, and question answering. The course also explains how GPT generates text, adapts to different writing styles, and can be fine-tuned for tasks like translating English to code. Additional topics include semantic search using Siamese BERT and multi-task learning with GPT through prompt engineering. By the end of the course, you will have the practical skills and theoretical understanding needed to apply BERT and GPT to various natural language processing problems.
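The tokenization step mentioned above can be illustrated with a simplified sketch of WordPiece, the subword scheme BERT uses: each word is split greedily, longest match first, against a subword vocabulary, with continuation pieces marked by a `##` prefix. The tiny vocabulary here is hypothetical and purely illustrative; the real `bert-base-uncased` vocabulary contains roughly 30,000 entries.

```python
# Simplified, illustrative sketch of WordPiece tokenization (BERT-style).
# VOCAB is a made-up toy vocabulary, not BERT's real one.
VOCAB = {"play", "##ing", "##ed", "trans", "##form", "##er", "[UNK]"}

def wordpiece_tokenize(word: str) -> list[str]:
    """Greedy longest-match-first split of one word into subword pieces."""
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        piece = None
        # Try the longest remaining substring first, shrinking until a match.
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # continuation pieces carry a ## prefix
            if sub in VOCAB:
                piece = sub
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no subword cover exists for this word
        tokens.append(piece)
        start = end
    return tokens

print(wordpiece_tokenize("playing"))      # ['play', '##ing']
print(wordpiece_tokenize("transformer"))  # ['trans', '##form', '##er']
```

After tokenization, each piece is mapped to an integer ID and then to a learned embedding vector — the "vectorization" the course refers to.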

Syllabus

  • Introduction to Transformer Models for NLP: Unit 2
    • This module provides a comprehensive exploration of modern transformer-based models for natural language processing. It covers the foundational architectures and mechanisms of BERT and GPT, delving into their pre-training, fine-tuning, and practical applications. Through hands-on lessons, learners engage with real-world tasks such as sequence and token classification, question answering, semantic search, and text generation. The module emphasizes both theoretical understanding and practical skills, enabling students to leverage BERT and GPT for a wide range of NLP challenges, including multi-task learning and adapting models to new domains or writing styles.
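The semantic search task mentioned above reduces to a simple retrieval step once sentences are embedded: a Siamese BERT model (e.g. the sentence-transformers family) maps each sentence to a dense vector, and the query's vector is compared to the corpus by cosine similarity. The sketch below uses made-up 3-dimensional vectors in place of real embeddings, purely to show the ranking step; all sentences and vectors are hypothetical.

```python
import math

# Toy semantic-search sketch. In practice each vector would come from a
# Siamese BERT encoder; these 3-d vectors are invented for illustration.
corpus = {
    "How do I fine-tune BERT?":   [0.9, 0.1, 0.0],
    "Best pasta recipes":         [0.0, 0.2, 0.9],
    "Transfer learning with GPT": [0.7, 0.5, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, corpus):
    """Rank corpus sentences by similarity to the query embedding."""
    return sorted(corpus, key=lambda s: cosine(query_vec, corpus[s]),
                  reverse=True)

# Pretend embedding of a query like "adapting BERT to my task".
query = [0.85, 0.2, 0.05]
print(search(query, corpus)[0])  # the fine-tuning sentence ranks first
```

Because cosine similarity ignores vector magnitude, only the direction of the embedding matters, which is why it is the standard choice for comparing sentence embeddings.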

Taught by

Pearson and Sinan Ozdemir

