Explore how fundamental statistical principles can enhance machine learning systems to provide reliable black-box inference from untrusted data in this conference talk. Learn about two critical challenges in high-stakes AI applications, data scarcity and test-time distribution shift, and discover how each can lead to misleading conclusions and unexpected failures.

Examine a framework that safely improves the sample efficiency of statistical inference procedures such as conformal prediction and hypothesis testing by adaptively leveraging synthetic data from generative models, with distribution-free error-control guarantees that require no assumptions about synthetic data quality. See practical applications across diverse domains, including protein structure prediction and win-rate evaluation of large reasoning models.

Understand a new approach to test-time training grounded in sequential statistical testing, featuring conformal betting martingales for principled data-drift detection and an anti-drift correction mechanism based on optimal transport principles. Discover how this mechanism forms the foundation of a self-training scheme that promotes invariance to dynamically changing environments, combining theoretical rigor with practical utility for trustworthy AI systems.
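To make the abstract's first ingredient concrete, here is a minimal split conformal prediction sketch (not from the talk itself; the regression task, model, and noise level are illustrative assumptions). It shows the distribution-free coverage guarantee the abstract refers to: calibrate a quantile of held-out nonconformity scores, then form prediction intervals that cover with probability at least 1 - alpha, with no assumptions on the model's quality.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_conformal_quantile(cal_scores, alpha):
    """Finite-sample-corrected quantile of calibration scores.

    Intervals of half-width q cover new points w.p. >= 1 - alpha
    under exchangeability, regardless of how good the model is.
    """
    n = len(cal_scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(level, 1.0), method="higher")

# Hypothetical task: y = x + noise; the "model" predicts y_hat = x.
x_cal = rng.normal(size=1000)
y_cal = x_cal + rng.normal(scale=0.5, size=1000)
scores = np.abs(y_cal - x_cal)              # calibration residuals

q = split_conformal_quantile(scores, alpha=0.1)

# Empirical coverage on fresh test data.
x_test = rng.normal(size=5000)
y_test = x_test + rng.normal(scale=0.5, size=5000)
covered = np.mean(np.abs(y_test - x_test) <= q)
print(f"interval half-width {q:.3f}, empirical coverage {covered:.3f}")
```

The guarantee is marginal over calibration and test draws, which is exactly the "distribution-free error control" flavor the talk builds on.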
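The abstract's framework for leveraging synthetic data without assumptions on its quality is not specified here, but a well-known construction in the same spirit is prediction-powered-style mean estimation: a rectifier measured on the scarce real data cancels whatever bias the model-generated data carries. The setup below (sample sizes, the biased model `f`) is an illustrative assumption, not the talk's method.

```python
import numpy as np

rng = np.random.default_rng(2)

def rectified_mean(y_real, f_real, f_pool):
    """Prediction-powered-style estimate of E[Y].

    Unbiased no matter how biased the model is: the rectifier,
    measured on real labeled data, cancels the model's systematic error.
    """
    rectifier = np.mean(y_real - f_real)    # model bias on real data
    return np.mean(f_pool) + rectifier

true_mean = 2.0
x_real = rng.normal(true_mean, 1, size=50)          # scarce labeled inputs
y_real = x_real + rng.normal(0, 0.5, size=50)       # real labels
x_pool = rng.normal(true_mean, 1, size=10_000)      # cheap unlabeled pool

f = lambda x: x + 0.7                               # systematically biased model

est = rectified_mean(y_real, f(x_real), f(x_pool))
naive = np.mean(f(x_pool))                          # model-only estimate
print(f"rectified: {est:.2f}, model-only: {naive:.2f}, truth: {true_mean}")
```

The model-only estimate inherits the 0.7 bias, while the rectified one does not; only its variance, not its validity, depends on synthetic-data quality.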
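For the second ingredient, a minimal conformal betting martingale for drift detection can be sketched as follows (again an illustrative toy, not the talk's exact procedure; the nonconformity score, betting rule, and stream are assumptions). Under exchangeability, smoothed conformal p-values are uniform, so wealth that bets against uniformity stays a martingale; by Ville's inequality, raising an alarm when wealth exceeds 1/delta controls the false-alarm probability at delta.

```python
import numpy as np

rng = np.random.default_rng(1)

def conformal_p(score, past_scores, u):
    """Smoothed conformal p-value: randomized rank among past scores."""
    greater = np.sum(past_scores > score)
    equal = np.sum(past_scores == score)
    return (greater + u * (equal + 1)) / (len(past_scores) + 1)

wealth = 1.0
lam = 1.0              # betting intensity; valid martingale for |lam| <= 2
history = []
alarm_at = None

# Toy stream: 300 in-distribution points, then a mean shift (drift).
stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 200)])

for t, x in enumerate(stream):
    s = abs(x)                         # nonconformity: distance from 0
    if history:
        p = conformal_p(s, np.array(history), rng.uniform())
        # Payoff has expectation 1 under no drift (p uniform), so wealth
        # is a nonnegative martingale; it grows only when p-values skew small.
        wealth *= 1 + lam * (0.5 - p)
        if alarm_at is None and wealth > 100:   # Ville: false alarm prob <= 1/100
            alarm_at = t
    history.append(s)

print("alarm raised at t =", alarm_at)
```

In the talk's framing, such an alarm would trigger the anti-drift correction and self-training step rather than merely flag the shift.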