Reversible Transformer - GPU Memory Optimization Using ReFORMER and Reversible Residual Layers
Discover AI via YouTube
Overview
Learn about advanced GPU memory optimization techniques for Vision Transformers in this technical video, which covers the theory of splitting activations, locality-sensitive hashing (LSH) attention as an alternative to dot-product attention, and reversible residual layers. Dive into Reversible Residual Layers (RRLs), which improve memory efficiency during training: a standard N-layer network must cache activations for all N layers for the backward pass, while a reversible network stores activations only once and reconstructs the rest from layer outputs. Understand how RRLs achieve this through invertible operations built on residual connections, making deeper neural networks practical by significantly reducing memory requirements. Explore real-world applications across image classification, object detection, and machine translation while examining key research papers, including the Reformer architecture, Deep Residual Learning, and The Reversible Residual Network. Master the fundamental concepts of residual connections and their implementation in modern deep learning architectures through detailed technical explanations and referenced academic works.
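As an illustration of the idea described above (not code from the video itself), here is a minimal PyTorch sketch of a reversible residual block in the RevNet/Reformer style. The names ReversibleBlock, f, and g are illustrative assumptions: f and g stand in for the two sublayers of a transformer block (e.g. attention and feed-forward).

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """A reversible residual block (RevNet/Reformer style).

    The input is split into two halves (x1, x2). Forward pass:
        y1 = x1 + F(x2)
        y2 = x2 + G(y1)
    Each step is invertible, so (x1, x2) can be recomputed exactly
    from (y1, y2) during backpropagation -- intermediate activations
    need not be stored for every layer.
    """
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f = f  # e.g. an attention sublayer
        self.g = g  # e.g. a feed-forward sublayer

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1: torch.Tensor, y2: torch.Tensor):
        # Recover the inputs from the outputs, with no stored activations.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

# Quick check that the block really inverts itself.
if __name__ == "__main__":
    torch.manual_seed(0)
    block = ReversibleBlock(nn.Linear(16, 16), nn.Linear(16, 16))
    x1, x2 = torch.randn(4, 16), torch.randn(4, 16)
    with torch.no_grad():
        y1, y2 = block(x1, x2)
        r1, r2 = block.inverse(y1, y2)
    print(torch.allclose(x1, r1), torch.allclose(x2, r2))  # True True
```

In a full training setup, the backward pass would call inverse() to recompute each block's inputs on the fly (typically via a custom autograd function), so only the final block's outputs need to be kept in memory, which is the source of the N-fold activation-memory saving discussed in the video.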
Syllabus
Reversible Transformer: ReFORMER for GPU Memory Optimization! Reversible Residual Layers?
Taught by
Discover AI