
Representing Small Floats for Machine Learning

The Julia Programming Language via YouTube

Overview

Learn about the IEEE floating-point arithmetic formats being designed specifically for machine learning in this conference talk from JuliaCon Global 2025. Explore a canonical, multi-format implementation of small floats and the advantages the Julia programming language offers for this specialized numerical work. The speaker is Editor-in-Chief of the IEEE working group drafting the standard for floating-point arithmetic formats for machine learning (IEEE P3109), and shares both the technical challenges and the solutions involved in representing numerical data efficiently for ML applications. Understand how the choice of floating-point representation affects machine learning performance, and why Julia's features make it particularly well suited to implementing these emerging standards in computational mathematics and artificial intelligence.
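For context on what "small floats" look like, one widely discussed 8-bit format is E5M2 (1 sign bit, 5 exponent bits, 2 mantissa bits), which follows IEEE-754 encoding conventions. The sketch below is purely illustrative and is not taken from the talk — the talk itself works in Julia against the draft IEEE P3109 formats — but it shows how few bits such a format has to work with:

```python
# Illustrative decoder for an 8-bit E5M2 "small float":
# 1 sign bit, 5 exponent bits (bias 15), 2 mantissa bits.
# This follows standard IEEE-754 conventions (subnormals, inf, NaN);
# it is a sketch for intuition, not the P3109 reference implementation.

def decode_e5m2(byte: int) -> float:
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 2) & 0b11111   # 5-bit biased exponent field
    man = byte & 0b11             # 2-bit mantissa field
    if exp == 0b11111:            # all-ones exponent: infinity or NaN
        return sign * float("inf") if man == 0 else float("nan")
    if exp == 0:                  # subnormal: no implicit leading 1
        return sign * (man / 4) * 2.0 ** (1 - 15)
    return sign * (1 + man / 4) * 2.0 ** (exp - 15)

# With only 2 mantissa bits, values are very coarsely spaced:
# 0b00111100 encodes 1.0, and the next representable value up is 1.25.
```

Trading precision for dynamic range (or vice versa, as in the E4M3 variant) is exactly the kind of design decision these standardization efforts weigh for ML workloads.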

Syllabus

Representing Small Floats for Machine Learning | Sarnoff | JuliaCon Global 2025

Taught by

The Julia Programming Language

