Overview
Explore cutting-edge research in algorithmic learning theory through this conference session featuring five technical presentations from leading researchers. Delve into sample complexity bounds for linear constrained Markov decision processes with a generative model, examining theoretical foundations for reinforcement learning in constrained environments. Investigate the complexity of vector-valued prediction, moving from linear models to broader stochastic convex optimization frameworks. Learn about smoothed online optimization for target tracking, with a focus on robust and learning-augmented algorithms. Analyze the computational challenges of ranking items from discrete user ratings when preference thresholds are unknown. Discover advances in sparse nonparametric contextual bandit algorithms and their theoretical guarantees. Each presentation offers deep technical insight into modern algorithmic learning challenges, combining theoretical analysis with practical implications for practitioners and researchers working in optimization, reinforcement learning, and online learning.
Syllabus
Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model - 9:52
Complexity of Vector-valued Prediction: From Linear Models to Stochastic Convex Optimization - 9:55
Smoothed Online Optimization for Target Tracking: Robust and Learning-Augmented Algorithms - 21:33
Ranking Items from Discrete Ratings: The Cost of Unknown User Thresholds - 34:08
Sparse Nonparametric Contextual Bandits - 46:05
Taught by
Fields Institute