Foundry Local - Cutting-Edge AI Experiences on Device with ONNX Runtime and Olive
AI Engineer via YouTube
Overview
Explore cutting-edge on-device AI experiences in this 23-minute conference talk from the AI Engineer World's Fair. Learn how to leverage ONNX Runtime and Olive to build local AI applications that run directly on user devices without requiring cloud connectivity. Discover Microsoft's approach to AI model operationalization and acceleration, with a focus on open, interoperable solutions. Gain insight into the frameworks and tools that enable efficient inference on edge devices, including optimization techniques for model deployment and performance tuning. Understand the practical and business benefits of on-device processing, from improved privacy and reduced latency to better user experiences. The presentation covers real-world strategies for integrating ONNX Runtime and Olive into an AI development workflow, making it valuable for developers, engineers, and product managers building AI-powered applications that prioritize local processing.
Syllabus
Foundry Local: Cutting-Edge AI experiences on device with ONNX Runtime/Olive — Emma Ning, Microsoft
Taught by
AI Engineer