Run LLMs Locally with React Native and MLC

Callstack Engineers via YouTube

Overview

Learn to implement local large language model execution in React Native applications using the MLC LLM Engine and the react-native-ai package. Discover how Callstack engineers achieved on-device AI inference without cloud dependencies, enabling real-time text generation directly on mobile devices.

Explore the integration of MLC's high-performance inference capabilities with React Native, including practical implementation of model switching using Vercel's AI SDK. Examine the technical architecture behind running LLMs locally on both iOS and Android, and understand the performance optimizations and trade-offs involved in edge AI deployment.

Gain insights into developer challenges, implementation gotchas, and upcoming improvements in the react-native-ai ecosystem. Understand the strategic advantages of local AI execution, including enhanced privacy, improved performance, and offline resilience for mobile applications. Follow along with hands-on demonstrations showing real-world applications of on-device AI inference, and learn best practices for integrating local LLM capabilities into React Native projects.
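The workflow described above — routing Vercel's AI SDK calls to an on-device MLC model — might look roughly like the following sketch. The `generateText` call is the AI SDK's real API, but the `mlc` provider import and the model identifier are assumptions for illustration, not the confirmed surface of the react-native-ai package:

```typescript
// Hypothetical sketch: the exact exports of react-native-ai may differ.
// The 'mlc' provider factory and the model id below are assumptions.
import { generateText } from 'ai';
import { mlc } from 'react-native-ai'; // assumed provider import

export async function askLocalModel(prompt: string): Promise<string> {
  // Inference runs entirely on-device via the MLC LLM Engine:
  // no network round-trip, so it works offline and data stays local.
  const { text } = await generateText({
    model: mlc('Llama-3.2-3B-Instruct'), // assumed model identifier
    prompt,
  });
  return text;
}
```

Because the package plugs into the AI SDK's provider interface, switching between a local MLC model and a cloud-hosted one should amount to changing only the `model` argument — which is what makes the model-switching demo in the talk possible.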

Syllabus

Run LLMs Locally with React Native & MLC

Taught by

Callstack Engineers
