Class Central Classrooms beta
YouTube videos curated by Class Central.
Classroom Contents
Attention to Transformers from Zero to Hero - Theory and Hands-on Projects
- 1 Neural Attention - This simple example will change how you think about it
- 2 The many amazing things about Self-Attention and why they work
- 3 Here is how Transformers ended the tradition of Inductive Bias in Neural Nets
- 4 10 years of NLP history explained in 50 concepts | From Word2Vec, RNNs to GPT
- 5 From Attention to Generative Language Models - One line of code at a time!
- 6 Turns out Attention wasn't all we needed - How have modern Transformer architectures evolved?
- 7 Finetune LLMs to teach them ANYTHING with Huggingface and Pytorch | Step-by-step tutorial
- 8 Vision Transformers - The big picture of how and why it works so well.
- 9 Sparse Mixture of Experts - The transformer behind the most efficient LLMs (DeepSeek, Mixtral)
- 10 Building awesome Speech To Text Transformers from scratch - One line of Pytorch at a time!
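The playlist above revolves around the scaled dot-product attention mechanism at the heart of Transformers. As a minimal sketch of that core idea in pure Python (helper names like `attention` are illustrative, not from the videos), the output is softmax(QKᵀ/√d)·V:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
    # with Q, K, V given as lists of row vectors (lists of floats).
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Output row is the attention-weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

# Tiny example: one query attending over two key/value pairs.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

Because the query aligns more with the first key, the output lands closer to the first value row than the second, which is the weighted-average behaviour the lectures build on.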