

Machine Learning Primitives as Algebraic Effects

ACM SIGPLAN via YouTube

Overview

Explore how Haskell's algebraic effects can revolutionize machine learning system design in this 17-minute conference talk from the Haskell 2025 symposium. Discover a novel approach that treats machine learning primitives, including backpropagation, parameter initialization, regularization, stochasticity, and normalization, as distinct algebraic effects tracked in row types rather than as tangled, rigid implementations. Learn how type signatures can directly capture a model's operational requirements, while handlers supply different semantics for the training, evaluation, and prediction stages.

Examine how automatic differentiation functions as an algebraic effect, enabling systematic reasoning about machine learning systems in the same way Haskell reasons about IO, nondeterminism, state, and aggregation. See how extensible effects enable modular composition, selective elimination, and experimentation, letting novel operations such as hyper-parameter nudging emerge naturally as definable effects.

Watch a demonstration of a working toy implementation that illustrates these concepts, keeping in mind that the research aims to reframe machine learning paradigms through Haskell's type system rather than to deliver a polished library. Gain insight into how this systematic investigation opens new directions for machine learning system architecture and design patterns.
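The talk demonstrates its own toy implementation, and none of that code is reproduced here. As an illustration only, the sketch below shows the core idea using the freer-simple extensible-effects library: two invented effects, Stochastic and Param, a tiny neuron whose row constraints spell out exactly the capabilities it needs, and separate handlers that give the same model training and evaluation semantics. All names (Stochastic, Param, neuron, the handlers) are hypothetical, not taken from the talk.

  {-# LANGUAGE DataKinds, FlexibleContexts, GADTs, LambdaCase, TypeOperators #-}

  import Control.Monad.Freer       (Eff, Member, interpret, reinterpret, run, send)
  import Control.Monad.Freer.State (evalState, get, put)

  -- Two hypothetical ML effects: stochasticity (dropout-style coin
  -- flips) and read-only access to named parameters.
  data Stochastic a where
    Bernoulli :: Double -> Stochastic Bool   -- keep a unit with probability p

  data Param a where
    GetParam :: String -> Param Double       -- look up a parameter by name

  -- A single neuron with dropout. Its row constraints are its type-level
  -- "operational requirements": it needs parameters and randomness.
  neuron :: (Member Param es, Member Stochastic es) => Double -> Eff es Double
  neuron x = do
    w    <- send (GetParam "w")
    b    <- send (GetParam "b")
    keep <- send (Bernoulli 0.5)             -- dropout on the output
    pure (if keep then w * x + b else 0)

  -- Evaluation-stage handler: dropout is disabled, so the model
  -- becomes fully deterministic.
  runEvalStochastic :: Eff (Stochastic ': es) a -> Eff es a
  runEvalStochastic = interpret $ \case
    Bernoulli _ -> pure True

  -- Training-stage handler: consume a pre-recorded stream of coin
  -- flips (a pure stand-in for a real RNG), threaded via State.
  runTrainStochastic :: [Bool] -> Eff (Stochastic ': es) a -> Eff es a
  runTrainStochastic flips =
    evalState flips . reinterpret (\case
      Bernoulli _ -> do
        fs <- get
        case fs of
          (f : rest) -> put rest >> pure f
          []         -> pure True)           -- out of flips: keep the unit

  -- Parameter handler backed by a lookup function.
  runParams :: (String -> Double) -> Eff (Param ': es) a -> Eff es a
  runParams look = interpret (\case GetParam name -> pure (look name))

  params :: String -> Double
  params "w" = 2.0
  params _   = 0.5

  main :: IO ()
  main = do
    print (run (runParams params (runEvalStochastic (neuron 3.0))))          -- 6.5
    print (run (runParams params (runTrainStochastic [False] (neuron 3.0)))) -- 0.0

The same neuron runs unchanged under either handler; switching between training and evaluation is just a matter of composing a different interpreter, which is the kind of modular composition and selective elimination the talk describes.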

Syllabus

[Haskell'25] Machine Learning Primitives as Algebraic Effects

Taught by

ACM SIGPLAN
