Watch a technical lecture in which Dr. Shubhada Agrawal, a postdoctoral researcher at CMU, presents research on estimating the asymptotic variance of Markov chains using stochastic approximation. Explore the development of the first recursive estimator that achieves the optimal O(1/n) convergence rate while requiring only constant computation and storage per step. Learn how this approach improves upon existing methods by eliminating the need to store historical samples or to know the run length in advance. Discover applications in average-reward reinforcement learning, including variance-constrained policy evaluation for safety-critical systems. Delve into extensions covering vector-valued functions, stationary variance estimation, and large state spaces. Gain insights from Dr. Agrawal's expertise in applied probability and sequential decision-making, developed through her academic journey from IIT Delhi to her current research at CMU.
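To make the estimation problem concrete, here is a minimal, self-contained sketch of *one classical baseline* for the same quantity: an online batch-means estimate of the asymptotic variance of a Markov chain's sample mean. This is **not** the stochastic-approximation estimator presented in the lecture; the chain (a simple AR(1) process), the batch size, and all function names below are illustrative assumptions. It does, however, share the property highlighted in the talk that no historical samples need to be stored.

```python
import random


def simulate_ar1(n, phi=0.5, seed=0):
    """Illustrative Markov chain: AR(1), X_{t+1} = phi * X_t + N(0, 1) noise.

    For this chain the true asymptotic variance of the sample mean is
    1 / (1 - phi)^2 (e.g. 4.0 for phi = 0.5), which gives a sanity check.
    """
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out


class OnlineBatchMeans:
    """Online batch-means estimate of sigma^2 = lim_n n * Var(sample mean).

    Keeps only O(1) state: the running sum of the current batch plus
    Welford-style running moments of the completed batch means, so no
    historical samples are retained.
    """

    def __init__(self, batch_size):
        self.b = batch_size
        self.batch_sum = 0.0
        self.in_batch = 0
        self.k = 0               # number of completed batches
        self.mean_of_means = 0.0
        self.m2 = 0.0            # sum of squared deviations of batch means

    def update(self, x):
        self.batch_sum += x
        self.in_batch += 1
        if self.in_batch == self.b:
            bm = self.batch_sum / self.b
            self.k += 1
            delta = bm - self.mean_of_means
            self.mean_of_means += delta / self.k
            self.m2 += delta * (bm - self.mean_of_means)
            self.batch_sum = 0.0
            self.in_batch = 0

    def asymptotic_variance(self):
        """b times the sample variance of the batch means."""
        if self.k < 2:
            return float("nan")
        return self.b * self.m2 / (self.k - 1)


if __name__ == "__main__":
    est = OnlineBatchMeans(batch_size=100)
    for x in simulate_ar1(200_000, phi=0.5, seed=1):
        est.update(x)
    print(est.asymptotic_variance())  # should be near 4.0 for phi = 0.5
```

Classical batch means needs the batch size tuned to the run length, which is exactly the kind of prior knowledge the lecture's recursive estimator dispenses with while also attaining the optimal O(1/n) rate.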