Overview
This weekly AI seminar explores continual post-training for large language models: the challenge of teaching LLMs new tasks without compromising what they already know. Learn about a practical method for continual learning during post-training that enables full-model fine-tuning without increasing model size or degrading general capabilities. The presentation focuses on constraining updates to carefully selected low-rank subspaces, which lets the model adapt to new tasks while preserving past knowledge. Access the related research paper, blog post, and code repository to apply these techniques in your own AI development work. Part of the "Random Samples" series, which bridges cutting-edge AI research with practical applications for developers, data scientists, and researchers.
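For a concrete picture of what "constraining updates to a low-rank subspace" can mean in practice, here is a minimal PyTorch sketch. Everything in it is an illustrative assumption rather than the seminar's actual algorithm: the basis here is chosen from the least-used singular directions of stored past-task gradients, and the function names, dimensions, and learning rate are invented for the toy example. See the linked paper and code repository for the real method.

```python
import torch

def update_basis(past_grads: torch.Tensor, rank: int) -> torch.Tensor:
    """Pick a rank-`rank` orthonormal basis for the subspace where updates
    are allowed. Illustrative assumption (not the seminar's rule): use the
    singular directions of past-task gradients with the smallest singular
    values, i.e. directions earlier tasks relied on least."""
    U, _, _ = torch.linalg.svd(past_grads.T, full_matrices=False)
    return U[:, -rank:]  # shape: (num_params, rank), orthonormal columns

def constrain_update(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Project a new-task gradient onto the allowed low-rank subspace, so
    the full-model update cannot move along directions assumed to encode
    past knowledge."""
    flat = grad.reshape(-1)
    return (basis @ (basis.T @ flat)).reshape(grad.shape)

# Toy usage: one parameter tensor and a few stored past-task gradients.
torch.manual_seed(0)
param = torch.nn.Parameter(torch.randn(8, 8))
past_grads = torch.randn(16, param.numel())   # 16 flattened gradient samples
basis = update_basis(past_grads, rank=4)

loss = (param ** 2).sum()                     # stand-in for a new-task loss
loss.backward()
with torch.no_grad():
    param -= 1e-2 * constrain_update(param.grad, basis)
```

Note that the update is full-rank-free only in direction, not in parameter count: every weight can still change, but only along the selected rank-4 subspace, which is the sense in which the model adapts without growing in size.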
Syllabus
Random Samples: Continual Post-Training
Taught by
Neural Magic