Do Pretrained Transformers Learn In-Context by Gradient Descent?
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore a 15-minute conference talk presented by Aayush Mishra at ICML 2024, examining the relationship between In-Context Learning (ICL) and Gradient Descent (GD) in pretrained language models. Delve into the limitations of prior theoretical connections between ICL and GD, highlighting how the experimental setups behind those results differ from the way real language models are trained. Analyze the speaker's findings on the divergent sensitivities of ICL and GD to demonstration order, drawn from comprehensive empirical analyses of the LLaMa-7B model. Gain insights into how ICL and GD modify a language model's output distribution in different ways, and understand why the equivalence between the two remains an open hypothesis requiring further investigation.
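To make the order-sensitivity probe described above concrete, here is a minimal sketch (not the speaker's code) of one way to measure how much an ICL output distribution shifts when the same demonstrations are permuted. The model name ("gpt2" as a small stand-in for LLaMa-7B, which the talk's experiments use), the prompt template, and the sentiment demonstrations are all illustrative assumptions.

```python
# Sketch: measure ICL sensitivity to demonstration order by comparing
# next-token label distributions across all permutations of the demos.
from itertools import permutations

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # small stand-in; the talk's experiments use LLaMa-7B
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

# Hypothetical sentiment demonstrations and a held-out query.
demos = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
    ("A delightful surprise.", "positive"),
]
query = "The plot bored me to tears."
labels = ("positive", "negative")

def label_distribution(ordered_demos):
    """Distribution over the label words at the query position,
    given one particular ordering of the in-context demonstrations."""
    prompt = "".join(f"Review: {x}\nSentiment: {y}\n" for x, y in ordered_demos)
    prompt += f"Review: {query}\nSentiment:"
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # next-token logits
    # Score each label by its first subword token (a common simplification).
    label_ids = [tok(" " + w).input_ids[0] for w in labels]
    return F.softmax(logits[label_ids], dim=-1)

# Compare output distributions across demonstration orders via KL divergence.
dists = [label_distribution(list(p)) for p in permutations(demos)]
ref = dists[0]
for i, d in enumerate(dists[1:], start=1):
    kl = F.kl_div(d.log(), ref, reduction="sum").item()  # KL(ref || d)
    print(f"order {i}: p(positive)={d[0].item():.3f}  KL vs order 0: {kl:.4f}")
```

The GD side of such a comparison would fine-tune a fresh copy of the model on the same demonstrations presented in each order, then compare the resulting output distributions the same way; the talk reports that the two procedures exhibit markedly different sensitivity to order, which is part of the evidence against a straightforward ICL-GD equivalence.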
Syllabus
Do pretrained Transformers Learn In-Context by Gradient Descent? Aayush Mishra (ICML 2024)
Taught by
Center for Language & Speech Processing (CLSP), JHU