ICL Ciphers - Quantifying Learning in In-Context Learning via Substitution Ciphers
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore a novel research presentation that investigates how Large Language Models perform "learning" during In-Context Learning through cryptographic substitution ciphers. Discover the innovative ICL CIPHERS methodology that reformulates tasks by substituting tokens with irrelevant alternatives while maintaining reversible patterns, creating a unique framework to distinguish between task retrieval and genuine inference-time learning. Learn how bijective (reversible) cipher mappings reveal LLMs' capacity for deciphering latent patterns compared to non-bijective baselines, with consistent findings across four datasets and six different models. Examine the internal representations of language models and understand the evidence for their cipher-decoding capabilities, providing new insights into the dual modes of in-context learning and offering a quantitative approach to measure true "learning" versus pattern retrieval in modern AI systems.
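To make the core idea concrete, here is a minimal sketch of the kind of token substitution the talk describes: a bijective cipher replaces tokens via a reversible one-to-one mapping (a permutation of the vocabulary), while a non-bijective baseline replaces tokens with a many-to-one mapping that cannot be inverted. This is an illustrative assumption-laden sketch, not the paper's actual implementation; the function names, the toy vocabulary, and the use of a simple random permutation are all hypothetical choices for illustration.

```python
import random


def make_bijective_cipher(vocab, seed=0):
    """Hypothetical sketch: build a bijective (reversible) substitution
    by randomly permuting the vocabulary, so every ciphered token maps
    back to exactly one original token."""
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return dict(zip(vocab, shuffled))


def make_non_bijective_cipher(vocab, seed=0):
    """Hypothetical baseline: each token is assigned an independent
    random replacement, so the mapping is generally many-to-one and
    has no well-defined inverse."""
    rng = random.Random(seed)
    return {t: rng.choice(vocab) for t in vocab}


def encipher(tokens, cipher):
    """Apply the substitution to a token sequence."""
    return [cipher.get(t, t) for t in tokens]


if __name__ == "__main__":
    vocab = ["good", "bad", "movie", "plot", "acting", "boring"]
    bij = make_bijective_cipher(vocab, seed=42)
    demo = ["good", "movie", "good", "acting"]

    ciphered = encipher(demo, bij)
    print(ciphered)

    # Because the cipher is a bijection, it is invertible: the original
    # demonstration can be recovered exactly, which is what lets the
    # latent task pattern survive the substitution.
    inverse = {v: k for k, v in bij.items()}
    assert [inverse[t] for t in ciphered] == demo
```

Under this framing, an LLM that solves bijectively ciphered demonstrations better than non-bijective ones must be recovering the latent mapping at inference time, which is the talk's operational notion of "learning" as opposed to task retrieval.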
Syllabus
ICL Ciphers: Quantifying "Learning" in In-Context Learning via Substitution Ciphers - EMNLP 2025 Main
Taught by
Center for Language & Speech Processing (CLSP), JHU