An Automatic Finite-Sample Robustness Check: Can Dropping a Little Data Change Conclusions?
Paul G. Allen School via YouTube
Overview
Watch a distinguished seminar by MIT's Tamara Broderick exploring a critical question about data robustness in statistical analysis: can dropping a small fraction of the data change a study's conclusions? Broderick presents an automatic finite-sample robustness check that reveals whether influential findings are driven by a tiny subset of observations. She explains that this sensitivity is governed by signal-to-noise ratios rather than by sample size or model misspecification. Through empirical examples, she shows that the conclusions of several influential economics papers can be reversed by removing less than 1% of their data, while other analyses remain robust. The talk offers valuable insights for researchers and practitioners concerned with the reliability and generalizability of data-driven conclusions across different populations or time periods.
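To make the question concrete, here is a minimal sketch of the general idea in a simple OLS setting. This is not Broderick's actual algorithm (which uses an influence-function approximation to avoid refitting the model for every candidate subset); instead it brute-forces the exact leave-one-out change in a regression slope, then greedily drops the few points whose deletion pushes the slope hardest toward zero. All variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Synthetic data: a small true effect buried in noise. Low
# signal-to-noise analyses are exactly the ones the talk flags
# as vulnerable to dropping a handful of observations.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(size=n)

def ols_slope(x, y):
    """Slope of y ~ 1 + x, via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

full_slope = ols_slope(x, y)

# Exact influence of each point: the change in the fitted slope
# when that single observation is deleted (brute force; fine for
# small n, but this is what the approximation replaces at scale).
influence = np.array([
    ols_slope(np.delete(x, i), np.delete(y, i)) - full_slope
    for i in range(n)
])

# Greedy heuristic: drop the k observations whose one-at-a-time
# deletion moves the slope most strongly toward a sign flip.
k = max(1, n // 100)  # "less than 1% of the data"
order = np.argsort(influence) if full_slope > 0 else np.argsort(-influence)
drop = order[:k]
keep = np.setdiff1d(np.arange(n), drop)
refit_slope = ols_slope(x[keep], y[keep])

print(f"full-sample slope: {full_slope:.4f}")
print(f"slope after dropping {k} of {n} points: {refit_slope:.4f}")
```

Whether the refit slope actually crosses zero depends on the signal-to-noise ratio of the dataset, which is the point of the talk: a conclusion resting on a weak signal can be overturned by a sliver of the sample, while a strong one cannot.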
Syllabus
Distinguished Seminar in Optimization & Data: Tamara Broderick (MIT)
Taught by
Paul G. Allen School