ICL Ciphers - Quantifying Learning in In-Context Learning via Substitution Ciphers
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore a research presentation that investigates how Large Language Models perform "learning" during In-Context Learning through cryptographic substitution ciphers. Discover the ICL CIPHERS methodology, which reformulates tasks by substituting tokens with irrelevant alternatives while preserving a reversible pattern, creating a framework to distinguish task retrieval from genuine inference-time learning. Learn how bijective (reversible) cipher mappings reveal LLMs' capacity to decipher latent patterns compared to non-bijective baselines, with consistent findings across four datasets and six models. Examine the internal representations of language models and the evidence for their cipher-decoding capabilities. The results offer new insight into the dual modes of in-context learning and a quantitative way to measure true "learning" versus pattern retrieval in modern AI systems.
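To make the bijective-versus-non-bijective distinction concrete, here is a minimal Python sketch of the core idea; the vocabulary, function names, and sampling scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (assumed names and toy vocabulary, not the paper's code):
# a bijective substitution cipher is a permutation of the vocabulary and is
# therefore reversible; a non-bijective baseline samples replacements with
# possible collisions, so no consistent pattern can be recovered.
import random

def make_bijective_cipher(vocab, seed=0):
    """Map each token to a distinct token via a random permutation."""
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return dict(zip(vocab, shuffled))

def make_non_bijective_baseline(vocab, seed=0):
    """Map each token to an independently sampled token (collisions allowed)."""
    rng = random.Random(seed)
    return {tok: rng.choice(vocab) for tok in vocab}

def apply_cipher(tokens, cipher):
    """Substitute every token that has an entry in the cipher mapping."""
    return [cipher.get(tok, tok) for tok in tokens]

vocab = ["great", "terrible", "movie", "plot", "boring", "fun"]
demo = ["great", "fun", "movie"]

bij = make_bijective_cipher(vocab, seed=42)
nonbij = make_non_bijective_baseline(vocab, seed=42)

print("bijective:    ", apply_cipher(demo, bij))
print("non-bijective:", apply_cipher(demo, nonbij))
```

Under the bijection, distinct tokens stay distinct, so an in-context learner could in principle invert the substitution; the baseline may collapse different tokens onto the same replacement, destroying the recoverable pattern and serving as the control condition.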
Syllabus
ICL Ciphers: Quantifying "Learning" in In-Context Learning via Substitution Ciphers (EMNLP 2025 main)
Taught by
Center for Language & Speech Processing (CLSP), JHU