
Building Multimodal Data Pipelines

DeepLearning.AI and Snowflake via Coursera

Overview

Images, audio, and video make up a growing share of the data companies generate today, but most pipelines are still built for structured data alone. This course teaches you to build AI-powered pipelines that process multimodal data and turn it into LLM-ready text.

You’ll start with the foundations: using ASR to extract transcripts from audio and turning images into LLM-ready text descriptions. From there, you’ll see how Vision Language Models generate descriptions from video segments, capturing not just what’s visible in a single frame, but what unfolds across a scene over time. You’ll then apply these skills to implement a multimodal RAG pipeline that searches across slides, audio, and video from meetings to answer questions about their content. By combining all three modalities, you give LLMs the rich context they need to deliver detailed answers across complex, real-world content.

In detail, you’ll:

  • Survey the multimodal data landscape, the unique challenges each data type presents, and the techniques that transform unstructured content into searchable text.
  • Apply OCR and ASR to convert images and audio into structured text, then embed them into a unified vector space for cross-modal semantic search.
  • Prompt Vision Language Models effectively, and choose the right frame sampling and embedding strategy for video.
  • Run a Vision Language Model on meeting videos to generate timestamped segment descriptions, then embed them alongside audio and slides for unified semantic and time-based search.
  • Build a multimodal RAG system that retrieves across audio, slides, and video to generate grounded, cited answers from meeting recordings.

Every technique you’ll learn serves the same goal data engineers have always had: take messy, unstructured data and turn it into something you can query, analyze, and build on.
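The idea of a unified vector space can be sketched in a few lines. This is not the course's implementation; the bag-of-words `embed` function, the toy `VOCAB`, and the sample index entries below are all hypothetical stand-ins for a real embedding model and a real meeting index. The point they illustrate is that once ASR transcripts, OCR'd slide text, and VLM video descriptions are embedded into one space, retrieval is the same cosine-similarity search regardless of which modality a result came from:

```python
import math

# Toy vocabulary and bag-of-words embedding — a placeholder for a real
# embedding model. Every modality is reduced to text first, so all three
# share one vector space.
VOCAB = ["revenue", "forecast", "chart", "demo", "login", "bug"]

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# One index across three modalities; each entry keeps its source modality
# and a timestamp so downstream answers can cite where the evidence came from.
index = [
    {"modality": "audio", "ts": "00:03:12", "text": "revenue forecast discussion"},
    {"modality": "slide", "ts": "00:03:05", "text": "chart revenue by quarter"},
    {"modality": "video", "ts": "00:14:40", "text": "presenter shows login bug demo"},
]

def search(query: str, k: int = 2) -> list[dict]:
    """Rank all entries by similarity to the query, regardless of modality."""
    q = embed(query)
    ranked = sorted(index, key=lambda e: cosine(q, embed(e["text"])), reverse=True)
    return ranked[:k]

for hit in search("revenue forecast"):
    print(hit["modality"], hit["ts"], hit["text"])
```

Note that the query matches both an audio transcript and a slide, which is exactly the cross-modal behavior the course builds toward: the retriever does not care which pipeline produced the text.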
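The frame sampling choice mentioned above can also be made concrete. The `sample_timestamps` helper below is a hypothetical sketch, not from the course materials: it shows the simplest strategy, fixed-interval sampling, where a video segment is reduced to evenly spaced timestamps whose frames would then be sent to a Vision Language Model for description. Denser intervals capture more of what unfolds across a scene at higher cost; the course discusses how to choose this trade-off.

```python
def sample_timestamps(duration_s: float, interval_s: float = 2.0) -> list[float]:
    """Return evenly spaced frame timestamps (seconds) covering a video segment.

    Fixed-interval sampling: one frame every `interval_s` seconds, starting
    at t=0. A real pipeline might instead sample on scene changes.
    """
    if duration_s <= 0 or interval_s <= 0:
        return []
    stamps, t = [], 0.0
    while t < duration_s:
        stamps.append(round(t, 2))
        t += interval_s
    return stamps

# A 10-second segment sampled every 2 seconds yields 5 frames:
print(sample_timestamps(10.0, 2.0))  # [0.0, 2.0, 4.0, 6.0, 8.0]
```

Each timestamp would pair a frame with its position in the recording, which is what makes the time-based search over segment descriptions possible later in the pipeline.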

Syllabus

  • Building Multimodal Data Pipelines

Taught by

Gilberto Hernandez
