Attention to Transformers from Zero to Hero - Theory and Hands-on Projects


Neural Breakdown with AVB via YouTube

Here is how Transformers ended the tradition of Inductive Bias in Neural Nets (3 of 10)

Class Central Classrooms beta

YouTube videos curated by Class Central.

Classroom Contents



  1. Neural Attention - This simple example will change how you think about it
  2. The many amazing things about Self-Attention and why they work
  3. Here is how Transformers ended the tradition of Inductive Bias in Neural Nets
  4. 10 years of NLP history explained in 50 concepts | From Word2Vec, RNNs to GPT
  5. From Attention to Generative Language Models - One line of code at a time!
  6. Turns out Attention wasn't all we needed - How have modern Transformer architectures evolved?
  7. Finetune LLMs to teach them ANYTHING with Huggingface and Pytorch | Step-by-step tutorial
  8. Vision Transformers - The big picture of how and why it works so well.
  9. Sparse Mixture of Experts - The transformer behind the most efficient LLMs (DeepSeek, Mixtral)
  10. Building awesome Speech To Text Transformers from scratch - One line of Pytorch at a time!
