
ICL Ciphers - Quantifying Learning in In-Context Learning via Substitution Ciphers

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore a research presentation that investigates how Large Language Models "learn" during In-Context Learning, using cryptographic substitution ciphers as a probe. Discover the ICL CIPHERS methodology, which reformulates tasks by substituting tokens with irrelevant alternatives while preserving a reversible pattern, creating a framework to distinguish task retrieval from genuine inference-time learning. Learn how bijective (reversible) cipher mappings reveal LLMs' capacity to decipher latent patterns relative to non-bijective baselines, with consistent findings across four datasets and six models. Examine the internal representations of language models and the evidence for their cipher-decoding abilities, gaining new insight into the dual modes of in-context learning and a quantitative way to separate true "learning" from pattern retrieval in modern AI systems.
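As a rough illustration of the bijective vs. non-bijective distinction described above (a toy sketch, not the paper's actual implementation or token vocabulary), a bijective substitution cipher is simply a permutation of the vocabulary, so every ciphered token can be mapped back to exactly one original token, while a non-bijective baseline allows collisions and is therefore irreversible:

```python
import random

def bijective_cipher(vocab, seed=0):
    # Bijective substitution: a random permutation of the vocabulary,
    # so the mapping has a well-defined inverse.
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return dict(zip(vocab, shuffled))

def non_bijective_cipher(vocab, seed=0):
    # Non-bijective baseline: each token maps to an arbitrary vocabulary
    # token, so collisions can make the mapping irreversible.
    rng = random.Random(seed)
    return {tok: rng.choice(vocab) for tok in vocab}

vocab = ["the", "movie", "was", "great", "terrible"]
enc = bijective_cipher(vocab)
dec = {v: k for k, v in enc.items()}  # inverse exists only for a bijection

sentence = ["the", "movie", "was", "great"]
ciphered = [enc[t] for t in sentence]
recovered = [dec[t] for t in ciphered]
assert recovered == sentence  # the bijective cipher is fully reversible
```

Under this framing, a model that recovers the task from bijectively ciphered demonstrations must be inferring the latent substitution pattern at inference time, which is the behavior the talk quantifies as "learning."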

Syllabus

ICL Ciphers: Quantifying "Learning" in In-Context Learning via Substitution Ciphers (EMNLP 2025 Main)

Taught by

Center for Language & Speech Processing (CLSP), JHU

