Overview
Learn to install and test Gemma 3n, Google DeepMind's open multimodal AI model designed for mobile deployment, through hands-on experimentation in a Google Colab notebook. Explore the model's capabilities across multiple modalities, including text, image, audio, and video processing.

Set up the development environment and configure the dependencies needed to run Gemma 3n. Test the model's text generation with a variety of prompts to gauge its language abilities. Examine its image understanding by feeding it visual content and checking the accuracy of its interpretations, then evaluate its video processing to see how well it handles temporal visual data and extracts meaningful information from video content.

Finally, weigh the model's performance against its compact size and mobile-optimized architecture to judge whether it delivers strong results despite resource constraints. Access the complete implementation through the provided Google Colab notebook, and consult the official developer documentation and model weights on Hugging Face for deeper technical understanding.
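The setup and text-prompt steps described above can be sketched roughly as follows. This is a minimal sketch, not the notebook's exact code: the Hugging Face model id `google/gemma-3n-E2B-it`, the `AutoModelForImageTextToText`/`AutoProcessor` classes, and the `pip install` line are assumptions based on a recent transformers release, so adjust them to match the notebook.

```python
# Sketch: loading Gemma 3n in Colab and running a first text prompt.
# Assumptions: model id "google/gemma-3n-E2B-it" and a transformers
# release with Gemma 3n support (install with e.g.
# `pip install -U transformers`).

def build_text_messages(prompt: str) -> list:
    """Wrap a plain prompt in the chat format the processor expects."""
    return [{"role": "user",
             "content": [{"type": "text", "text": prompt}]}]

def run_text_prompt(prompt: str,
                    model_id: str = "google/gemma-3n-E2B-it") -> str:
    """Load Gemma 3n and generate a reply. Imports are deferred so the
    sketch stays cheap to import outside a GPU runtime."""
    import torch
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto")

    inputs = processor.apply_chat_template(
        build_text_messages(prompt), add_generation_prompt=True,
        tokenize=True, return_dict=True, return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    return processor.decode(out[0][inputs["input_ids"].shape[-1]:],
                            skip_special_tokens=True)

# In Colab:
# print(run_text_prompt("Summarize what makes Gemma 3n mobile-friendly."))
```

The model call is wrapped in a function rather than run at import time, since downloading the weights requires a Colab GPU runtime and a Hugging Face license acceptance for the Gemma models.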
Syllabus
00:00 - Welcome
03:15 - Notebook setup
06:26 - Text prompt
07:13 - Image understanding
11:50 - Video understanding
13:59 - Conclusion
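The image- and video-understanding segments in the syllabus can be sketched as message builders for a chat-style multimodal processor (transformers' `apply_chat_template`). The content field names and the frames-as-images approach for video are assumptions and may differ from the notebook:

```python
# Sketch of the image- and video-understanding steps. Assumption: the
# processor accepts interleaved {"type": "image"} / {"type": "text"}
# content entries; exact field names vary across transformers versions.
from typing import Dict, List

def build_image_messages(image: str, question: str) -> List[Dict]:
    """Pair one image (path or URL) with a question."""
    return [{"role": "user",
             "content": [{"type": "image", "image": image},
                         {"type": "text", "text": question}]}]

def build_video_messages(frame_paths: List[str],
                         question: str) -> List[Dict]:
    """Video sketch: sample frames from the clip and pass them as a
    sequence of images followed by the question (assumption)."""
    content: List[Dict] = [{"type": "image", "image": p}
                           for p in frame_paths]
    content.append({"type": "text", "text": question})
    return [{"role": "user", "content": content}]

def describe(model, processor, messages, max_new_tokens: int = 128) -> str:
    """Run one multimodal prompt through an already-loaded Gemma 3n."""
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True,
        tokenize=True, return_dict=True, return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return processor.decode(out[0][inputs["input_ids"].shape[-1]:],
                            skip_special_tokens=True)
```

For video, how many frames to sample is a quality/latency trade-off on mobile-class hardware; the notebook's own frame count may differ from whatever you choose here.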
Taught by
Venelin Valkov