Deploy Local AI Models in Enterprise with Windows ML


Microsoft Ignite via YouTube

YouTube videos curated by Class Central.

Classroom Contents

  1. 00:00:00 - Session Introduction and Welcome to BRK329: Deploying Local AI Models with Windows ML
  2. 00:02:35 - Why Local AI Matters – Privacy, Security, and Performance Benefits
  3. 00:12:19 - Registering Execution Providers with ONNX Runtime
  4. 00:12:33 - Selecting the QNN NPU Execution Provider
  5. 00:14:05 - Debugging Execution Provider Registration and Device Readiness
  6. 00:23:50 - Encoding Prompts and Setting Up the Local Model Generator
  7. 00:25:01 - Live GPU Image Generation Demo Using Windows ML
  8. 00:34:16 - Developers Can Focus on App Logic While Windows ML Abstracts Model-to-Hardware Operations
  9. 00:38:44 - CPU-Based Inference Comparison and Lightweight Deployment Flexibility
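Chapters 3–5 cover registering and selecting execution providers (EPs) with ONNX Runtime, where the runtime falls back from an NPU or GPU provider to the CPU when the preferred hardware is unavailable. A minimal sketch of that preference-ordered selection logic, using ONNX Runtime's standard provider names (`QNNExecutionProvider` for Qualcomm NPUs, `DmlExecutionProvider` for DirectML GPUs); the helper function itself is illustrative, not part of the Windows ML API:

```python
def pick_providers(available):
    """Order preferred execution providers (NPU > GPU > CPU),
    keeping only those actually available on this machine."""
    preferred = [
        "QNNExecutionProvider",   # Qualcomm NPU
        "DmlExecutionProvider",   # DirectML GPU
        "CPUExecutionProvider",   # always-available fallback
    ]
    chosen = [p for p in preferred if p in available]
    # Guarantee a usable fallback even if the available list is empty.
    return chosen or ["CPUExecutionProvider"]
```

In a real application this would feed an inference session, e.g. `onnxruntime.InferenceSession(model_path, providers=pick_providers(onnxruntime.get_available_providers()))`, so the same app code runs on NPU, GPU, or CPU hardware.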
