Controllable Safety Alignment - Inference-Time Adaptation to Diverse Safety Requirements
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore a research presentation introducing Controllable Safety Alignment (CoSA), a framework for adapting large language models to diverse safety requirements at inference time, without retraining. Learn how this approach moves beyond the one-size-fits-all safety paradigm by letting models adapt to different cultural norms and user needs through safety configs: free-form natural language descriptions of the desired safety behaviors. Discover CoSAlign, a method for aligning language models to follow diverse safety configs; understand the accompanying controllability evaluation protocol, which considers both helpfulness and configured safety and summarizes them into CoSA-Score; and examine CoSApien, a human-authored benchmark of real-world use cases with diverse safety requirements. Delve into experimental results showing substantial gains in controllability over strong baselines, including in-context alignment, and see how the framework supports better representation of and adaptation to pluralistic human values in AI systems.
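The overview is descriptive, but the core mechanism is simple enough to sketch: a safety config is a natural-language policy supplied at inference time (in the system prompt), and CoSA-Score aggregates per-response judgments of helpfulness and config compliance. The sketch below is illustrative only: all names, the example config, and the scoring rule (+1 for helpful and compliant, 0 for unhelpful but compliant, -1 for any violation) are assumptions based on the description above, not the paper's exact implementation.

```python
# Illustrative sketch of inference-time safety configs and a CoSA-Score-style
# aggregate. All names and the scoring rule are assumptions for illustration,
# not the paper's exact implementation.
from dataclasses import dataclass
from typing import Dict, List

# A "safety config" is a natural-language policy prepended to the
# conversation, so behavior can change per deployment without retraining.
GAME_STUDIO_CONFIG = (
    "You are assisting writers at a game studio. Graphic violence is "
    "acceptable in fictional contexts; real-world harm instructions are not."
)

def build_messages(safety_config: str, user_prompt: str) -> List[Dict[str, str]]:
    # The config rides in the system prompt; swapping it re-targets the model.
    return [
        {"role": "system", "content": safety_config},
        {"role": "user", "content": user_prompt},
    ]

@dataclass
class Judgment:
    helpful: bool    # did the response address the request?
    compliant: bool  # did it stay within the safety config?

def cosa_style_score(judgments: List[Judgment]) -> float:
    # Assumed scoring rule: +1 if helpful and compliant, 0 if unhelpful but
    # compliant, -1 for any config violation; averaged over test prompts.
    def score(j: Judgment) -> int:
        if not j.compliant:
            return -1
        return 1 if j.helpful else 0
    return sum(score(j) for j in judgments) / len(judgments)

if __name__ == "__main__":
    msgs = build_messages(GAME_STUDIO_CONFIG, "Draft a battle scene.")
    print(msgs[0]["content"][:60], "...")
    judged = [Judgment(True, True), Judgment(False, True), Judgment(True, False)]
    print(cosa_style_score(judged))  # (1 + 0 - 1) / 3 -> 0.0
```

Under this framing, controllability is a property of the model-plus-config pair: the same model scores differently under different configs, which is what the evaluation protocol is designed to measure.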
Syllabus
Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements (Jack Zhang)
Taught by
Center for Language & Speech Processing (CLSP), JHU