
freeCodeCamp

Code 7 Landmark NLP Papers in PyTorch - Full Neural Machine Translation Course

via freeCodeCamp

Overview

Embark on a comprehensive 7-hour journey through the evolution of sequence models and neural machine translation by implementing 7 landmark NLP papers in PyTorch. Trace the historical development from RNNs and LSTMs to modern Transformers while gaining hands-on experience replicating groundbreaking research papers: Cho et al. (2014), Sutskever et al. (2014), Bahdanau et al. (2015), Jean et al. (2015), Luong et al. (2015), Wu et al. (2016), and Johnson et al. (2017).

Master the mathematical foundations behind RNNs, LSTMs, GRUs, and attention mechanisms through detailed explanations and visual demonstrations. Build practical coding skills by implementing each paper's architecture from scratch in PyTorch, following step-by-step labs that recreate these pivotal moments in NLP history.

Explore the evolution of machine translation from statistical methods to neural approaches, understand the architectural innovations that led to modern NMT systems, and compare sequence modeling approaches including RNNs, LSTMs, GRUs, and Transformers. Gain insights into Google's Neural Machine Translation system (GNMT) and multilingual NMT while working with interactive tools like the Transformer Playground to visualize complex architectures and concepts.
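To give a flavor of the labs, here is a minimal, self-contained sketch of Bahdanau-style additive attention, the mechanism replicated in the Bahdanau et al. (2015) lab. The class and variable names (AdditiveAttention, hidden_dim, and so on) are illustrative assumptions, not the course's actual code.

# A minimal sketch (not from the course) of Bahdanau-style additive attention.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.W_enc = nn.Linear(hidden_dim, hidden_dim, bias=False)  # projects encoder states
        self.W_dec = nn.Linear(hidden_dim, hidden_dim, bias=False)  # projects decoder state
        self.v = nn.Linear(hidden_dim, 1, bias=False)               # scores each source position

    def forward(self, dec_hidden, enc_outputs):
        # dec_hidden: (batch, hidden_dim); enc_outputs: (batch, src_len, hidden_dim)
        scores = self.v(torch.tanh(
            self.W_enc(enc_outputs) + self.W_dec(dec_hidden).unsqueeze(1)
        )).squeeze(-1)                                # (batch, src_len)
        weights = torch.softmax(scores, dim=-1)       # attention distribution over source
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
        return context, weights                       # context: (batch, hidden_dim)

# Toy usage with random tensors
attn = AdditiveAttention(hidden_dim=8)
enc = torch.randn(2, 5, 8)   # batch of 2, source length 5
dec = torch.randn(2, 8)
context, weights = attn(dec, enc)
print(context.shape, weights.shape)  # torch.Size([2, 8]) torch.Size([2, 5])

Additive attention scores each encoder state against the current decoder state, then takes a softmax-weighted sum to form the context vector fed into the next decoding step.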

Syllabus

– 0:01:06 Welcome
– 0:04:27 Intro to Atlas
– 0:09:25 Evolution of RNN
– 0:15:08 Evolution of Machine Translation
– 0:26:56 Machine Translation Techniques
– 0:34:28 Long Short-Term Memory Overview
– 0:52:36 Learning Phrase Representations using RNN Encoder–Decoder for SMT
– 1:00:46 Learning Phrase Representations PyTorch Lab – Replicating Cho et al., 2014
– 1:23:45 Seq2Seq Learning with Neural Networks
– 1:45:06 Seq2Seq PyTorch Lab – Replicating Sutskever et al., 2014
– 2:01:45 NMT by Jointly Learning to Align & Translate – Bahdanau et al., 2015
– 2:32:36 NMT by Jointly Learning to Align & Translate PyTorch Lab – Replicating Bahdanau et al., 2015
– 2:42:45 On Using Very Large Target Vocabulary – Jean et al., 2015
– 3:03:45 Large Vocabulary NMT PyTorch Lab – Replicating Jean et al., 2015
– 3:24:56 Effective Approaches to Attention – Luong et al., 2015
– 3:44:06 Attention Approaches PyTorch Lab – Replicating Luong et al., 2015
– 4:03:17 Long Short-Term Memory Network Deep Explanation
– 4:28:13 Attention Is All You Need – Vaswani et al., 2017
– 4:47:46 Google Neural Machine Translation System (GNMT) – Wu et al., 2016
– 5:12:38 GNMT PyTorch Lab – Replicating Wu et al., 2016
– 5:29:46 Google’s Multilingual NMT – Johnson et al., 2017
– 6:00:46 Multilingual NMT PyTorch Lab – Replicating Johnson et al., 2017
– 6:15:49 Transformer vs GPT vs BERT Architectures
– 6:36:38 Transformer Playground Tool Demo
– 6:38:31 Seq2Seq Idea from Google Translate Tool
– 6:49:31 RNN, LSTM, and GRU Architecture Comparisons
– 7:01:08 LSTM & GRU Equations (summarized below)
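
For quick reference, the update equations covered in the final chapter, written in one common notation (the GRU form follows Cho et al., 2014); this is a summary sketch, not a transcript of the video's slides:

% LSTM: input, forget, and output gates plus a separate cell state c_t
% GRU: update and reset gates acting directly on the hidden state h_t
\begin{aligned}
\text{LSTM:}\quad
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), & f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), & \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c), \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, & h_t &= o_t \odot \tanh(c_t). \\[4pt]
\text{GRU:}\quad
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z), & r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r), \\
\tilde{h}_t &= \tanh(W x_t + U (r_t \odot h_{t-1}) + b), & h_t &= z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t.
\end{aligned}

The structural difference: the GRU merges the LSTM's input/forget pair into a single update gate z_t and exposes the full hidden state, while the LSTM keeps a separate cell state c_t gated by an output gate.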

Taught by

freeCodeCamp.org

