Overview
Learn about Rotary Positional Embeddings (RoPE), a fundamental method used in Transformer models across modalities such as text, images, and video. Explore how RoPE enables Transformers to understand positional relationships in input data, such as the order of text tokens in a sentence or the sequence of frames in a video. Discover why this technique is essential to modern Large Language Models, and gain hands-on experience through a complete PyTorch implementation. Master the mathematical foundations of rotary embeddings and understand their practical applications in contemporary AI architectures.
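As a taste of what the implementation covers, here is a minimal sketch of RoPE in PyTorch. This is not the course's own code: it assumes an even head dimension, the common base of 10000, and the "rotate-half" pairing convention, rotating each feature pair by a position-dependent angle.

```python
import torch

def rotary_embedding(x, base=10000.0):
    # x: (seq_len, dim) with even dim. Each feature pair (x1_i, x2_i) is
    # rotated by angle position * theta_i, where theta_i = base^(-i / (dim/2)).
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies.
    inv_freq = base ** (-torch.arange(half, dtype=torch.float32) / half)
    # Angles: outer product of positions and frequencies -> (seq_len, half).
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()
    # "Rotate-half" pairing: pair feature i with feature i + half.
    x1, x2 = x[:, :half], x[:, half:]
    # Standard 2D rotation applied to each pair.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Example: apply RoPE to query vectors before computing attention scores.
q = torch.randn(8, 64)        # 8 positions, 64-dim attention head
q_rot = rotary_embedding(q)
print(q_rot.shape)            # torch.Size([8, 64])
```

Because the rotation angle depends only on position, the dot product between two rotated vectors depends only on their relative offset, which is why RoPE encodes relative position directly into attention scores.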
Syllabus
RoPE | Explanation + PyTorch Implementation
Taught by
Outlier