
RoPE - Rotary Positional Embeddings Explanation and PyTorch Implementation

Outlier via YouTube

Overview

Learn about Rotary Positional Embeddings (RoPE), a positional-encoding method used in Transformer models across modalities including text, images, and video. Explore how RoPE lets Transformers represent positional relationships in input data, such as the order of text tokens in a sentence or the sequence of frames in a video. Discover why this technique is a standard component of modern Large Language Models and gain hands-on experience through a complete PyTorch implementation. Master the mathematical foundations behind rotary embeddings and understand their practical applications in contemporary AI architectures.
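The core idea the video covers can be sketched briefly: RoPE treats each pair of embedding channels as 2D coordinates and rotates them by an angle proportional to the token's position, so the dot product between a rotated query and key depends only on their relative distance. The sketch below is a minimal NumPy illustration of that idea (the course itself works in PyTorch); the function name `rope` and the default `base=10000.0` follow the common convention from the RoPE paper, not this specific video.

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply rotary positional embeddings to x of shape (seq_len, dim).

    Each channel pair (2i, 2i+1) is rotated by angle pos * theta_i,
    with theta_i = base ** (-2i / dim), as in the standard RoPE scheme.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "embedding dim must be even"
    half = dim // 2
    # One frequency per channel pair, shape (half,)
    theta = base ** (-np.arange(half) * 2.0 / dim)
    # Rotation angle for every (position, pair), shape (seq_len, half)
    angles = positions[:, None] * theta[None, :]
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]  # even / odd channels of each pair
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin  # standard 2D rotation
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# The key property: <rope(q, m), rope(k, n)> depends only on m - n,
# so shifting both positions by the same offset leaves scores unchanged.
rng = np.random.default_rng(0)
q = rng.standard_normal((1, 8))
k = rng.standard_normal((1, 8))
score_a = (rope(q, np.array([3.0])) @ rope(k, np.array([5.0])).T).item()
score_b = (rope(q, np.array([10.0])) @ rope(k, np.array([12.0])).T).item()
print(np.isclose(score_a, score_b))  # relative-position invariance
```

Rotating queries and keys (rather than adding a position vector to the embeddings) is what makes the relative-position property fall out of the attention dot product for free.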

Syllabus

RoPE | Explanation + PyTorch Implementation

Taught by

Outlier

