Foundry Local - Cutting-Edge AI Experiences on Device with ONNX Runtime and Olive
AI Engineer via YouTube
Overview
Explore cutting-edge on-device AI experiences through this 23-minute conference talk from the AI Engineer World's Fair. Learn how to leverage ONNX Runtime and Olive to build local AI applications that run directly on user devices without requiring cloud connectivity, and discover Microsoft's approach to AI model operationalization and acceleration, with a focus on open and interoperable AI solutions.

Gain insight into the frameworks and tools that enable efficient AI inference on edge devices, including optimization techniques for model deployment and performance tuning. Understand the practical applications and business benefits of on-device AI processing, from improved privacy and reduced latency to enhanced user experiences.

The presentation covers real-world strategies for integrating ONNX Runtime and Olive into your AI development workflow, making it valuable for developers, engineers, and product managers building AI-powered applications that prioritize local processing.
Syllabus
Foundry Local: Cutting-Edge AI experiences on device with ONNX Runtime/Olive — Emma Ning, Microsoft
Taught by
AI Engineer