
freeCodeCamp

Evolution of the Transformer Architecture Used in LLMs (2017-2025) - Full Course

via freeCodeCamp

Overview

Explore the evolution of the Transformer architecture behind large language models in this comprehensive 2-hour 50-minute course, designed for beginners and taught step by step. Discover the advancements that improved the accuracy, efficiency, and scalability of Transformers between 2017 and 2025. Learn various techniques for encoding positional information, examine different types of attention mechanisms, understand normalization methods and where best to place them, and study commonly used activation functions. Build up the fundamentals through detailed explanations of positional encoding, attention mechanisms, and small refinements before putting everything together. Accompanying slides, notebooks, and scripts are available in the course's GitHub repository so you can apply the concepts hands-on.
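As a concrete point of reference for the positional-encoding topic, the sinusoidal encoding from the original 2017 Transformer paper (the starting point the course builds on) can be sketched in a few lines. This is a minimal illustration, not material taken from the course itself:

```python
import math

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> list[list[float]]:
    """Sinusoidal positional encoding (Vaswani et al., 2017).

    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            # Each pair of dimensions shares one frequency.
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

pe = sinusoidal_positional_encoding(seq_len=4, d_model=8)
```

Later techniques covered in the 2017-2025 window (such as learned and rotary embeddings) replace this fixed table, but the sin/cos pairing above is the baseline they are compared against.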

Syllabus

0:00:00 Course Overview
0:03:24 Introduction
0:05:13 Positional Encoding
1:02:23 Attention Mechanisms
2:18:04 Small Refinements
2:42:19 Putting Everything Together
2:47:47 Conclusion

Taught by

freeCodeCamp.org

