Overview
Explore a 33-minute conference talk on identifying representations for intervention extrapolation, presented by Sorawit (James) Saengkyongam from Valence Labs. Delve into identifiable and causal representation learning and its role in improving the generalizability and robustness of machine learning systems.

Examine the task of intervention extrapolation: predicting how interventions never observed during training affect an outcome. Learn about the setup, which involves an outcome Y, observed features X, latent features Z, and exogenous action variables A, and discover how identifiable representations provide effective solutions even when intervention effects are non-linear.

Understand the Rep4Ex approach, which combines intervention extrapolation with identifiable representation learning. Explore the theoretical findings on identifiability and the proposed method for enforcing a linear invariance constraint. Follow along as the speaker validates the theory through synthetic experiments and demonstrates successful prediction of unseen intervention effects, then engage with the Q&A session for further insights into this cutting-edge research in causal representation learning.
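The setup described above can be sketched in a toy simulation. The specific dimensions, the `tanh` mixing function, and the noise scale below are illustrative assumptions, not the speaker's exact model; the sketch only shows the key structural point: the conditional mean of the latent Z is linear in the action A (so ordinary least squares recovers the effect matrix), while the conditional mean of the observed X = g(Z) is not, which is why an identifiable unmixing of X is needed before extrapolating to unseen interventions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the talk's setup (all concrete choices are assumptions):
# exogenous actions A shift the latent features Z linearly, and we only
# observe a non-linear mixing X = g(Z).
n = 50_000
A = rng.uniform(-1.0, 1.0, size=(n, 2))          # exogenous action variables
M = np.array([[1.0, 0.5],
              [-0.5, 2.0]])                      # linear effect of A on Z
Z = A @ M.T + 0.1 * rng.standard_normal((n, 2))  # latent features
X = np.tanh(Z)                                   # non-linear mixing g (assumed)

# Linear invariance in latent space: E[Z | A = a] is linear in a, so an
# ordinary least-squares fit of Z on A recovers M up to noise.
coef_Z, *_ = np.linalg.lstsq(A, Z, rcond=None)
resid_Z = Z - A @ coef_Z

# The same linear fit on the observed X leaves a systematic residual,
# because E[X | A = a] is non-linear in a; an oracle basis that includes
# non-linear features of A shrinks the residual substantially.
coef_X, *_ = np.linalg.lstsq(A, X, rcond=None)
resid_X = X - A @ coef_X
basis = np.column_stack([A, np.tanh(A @ M.T)])   # oracle non-linear features
coef_nl, *_ = np.linalg.lstsq(basis, X, rcond=None)
resid_X_nl = X - basis @ coef_nl
```

In the Rep4Ex framing, one does not have access to the oracle basis; instead, a representation φ(X) is learned under the constraint that E[φ(X) | A = a] is linear in a, which (per the talk's identifiability results) pins down the unmixing up to an affine transformation.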
Syllabus
- Introduction
- Intervention Extrapolation with Observed Z
- Intervention Extrapolation via Identifiable Representations
- Identification of the Unmixing Function
- Simulations
- Q&A
Taught by
Valence Labs