This 35-minute talk by Riccardo Cadei (INRIA/ISTA), presented at the Institut des Hautes Études Scientifiques (IHES), explores how machine learning can overcome data-annotation limitations in scientific research through Prediction-Powered Causal Inference (PPCI). Learn how treatment effects can be estimated from unlabeled factual outcomes retrieved zero-shot from pre-trained models. Discover the conditional calibration property that ensures valid PPCI at the population level, and the novel "causal lifting" constraint that enables validity to transfer across experiments. Explore Deconfounded Empirical Risk Minimization, a model-agnostic training objective that outperforms standard Empirical Risk Minimization and invariant-training approaches. See practical applications on synthetic and real-world scientific datasets, including the first successful zero-shot PPCI on the ISTAnt dataset using a fine-tuned foundation model.
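To make the core idea concrete, here is a minimal NumPy sketch of a prediction-powered average-treatment-effect estimator in the classic PPI style: the effect is first estimated on model predictions over unlabeled data, then bias-corrected with a "rectifier" from a small labeled set. The synthetic data, the deliberately miscalibrated `predict` stand-in for a fine-tuned foundation model, and the specific numbers are all illustrative assumptions, not the talk's actual pipeline; the talk's zero-shot PPCI setting goes further by transferring validity across experiments without any labels from the target experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic experiment: binary treatment T, outcome Y = 1 + 2*T + noise,
# so the true average treatment effect (ATE) is 2.
def simulate(n):
    T = rng.integers(0, 2, size=n)
    Y = 1.0 + 2.0 * T + rng.normal(0.0, 1.0, size=n)
    return T, Y

T_u, Y_u = simulate(10_000)  # large unlabeled set: Y_u is treated as unobserved
T_l, Y_l = simulate(200)     # small labeled calibration set

# Hypothetical stand-in for a zero-shot predictor of the factual outcome
# (e.g. a fine-tuned foundation model); deliberately miscalibrated so the
# rectifier below has something to correct.
def predict(T):
    return 1.3 + 1.6 * T

f_u, f_l = predict(T_u), predict(T_l)

# Step 1: ATE computed on predictions over the unlabeled experiment.
ate_on_predictions = f_u[T_u == 1].mean() - f_u[T_u == 0].mean()

# Step 2: per-arm rectifier from the labeled set corrects prediction bias.
resid = Y_l - f_l
rectifier = resid[T_l == 1].mean() - resid[T_l == 0].mean()

# Prediction-powered ATE estimate: close to the true value 2 despite the
# biased predictor, because the rectifier absorbs its systematic error.
ate_pp = ate_on_predictions + rectifier
print(ate_pp)
```

The naive estimate on predictions alone (1.6 here) inherits the model's bias; adding the labeled-set rectifier restores an unbiased estimate, which is the validity guarantee the conditional calibration property lifts to the zero-shot setting.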