Overview
Learn about the development of IEEE floating-point arithmetic formats designed for machine learning in this conference talk from JuliaCon Global 2025. Explore a canonical approach to implementing small floats across multiple formats, and discover the advantages the Julia programming language offers for this specialized computational work. The speaker is Editor-in-Chief of IEEE P3109, the working group drafting the standard for floating-point arithmetic formats in machine learning, and shares insights into both the technical challenges and the solutions involved in representing numerical data efficiently for ML applications. Understand how the choice of floating-point representation affects machine learning performance, and why Julia's features make it particularly well suited to implementing these emerging standards in computational mathematics and artificial intelligence.
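To give a concrete sense of the small-float formats the talk discusses, here is a minimal sketch (not from the talk, and not the IEEE P3109 definition) that decodes a hypothetical 8-bit float with 1 sign, 4 exponent, and 3 mantissa bits using plain IEEE-754 conventions; the formats actually standardized by P3109 may allocate exponent bias and special code points differently.

```python
def decode_fp8(byte):
    """Decode a hypothetical 8-bit float: 1 sign, 4 exponent, 3 mantissa bits.

    Uses standard IEEE-754 conventions (bias 7, gradual underflow,
    all-ones exponent reserved for Inf/NaN). This is an illustrative
    assumption, not the IEEE P3109 specification.
    """
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    mant = byte & 0x7
    if exp == 0xF:                       # reserved exponent: Inf or NaN
        return sign * float("inf") if mant == 0 else float("nan")
    if exp == 0:                         # subnormal: no implicit leading 1
        return sign * (mant / 8) * 2 ** (1 - 7)
    return sign * (1 + mant / 8) * 2 ** (exp - 7)

# With only 256 code points, representable values are sparse:
# the two neighbours of 1.0 differ from it by 2**-3 = 0.125.
print(decode_fp8(0b0_0111_000))  # 1.0
print(decode_fp8(0b0_0111_001))  # 1.125
```

The sparsity shown here is why the choice of format (how many bits go to the exponent versus the mantissa, and how special values are encoded) matters so much for ML workloads.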
Syllabus
Representing Small Floats for Machine Learning | Sarnoff | JuliaCon Global 2025
Taught by
The Julia Programming Language