Building Video Recommendations with Twelve Labs and Qdrant
Qdrant - Vector Database & Search Engine via YouTube
Overview
In this 45-minute talk, Hrishikesh Yadav from Twelve Labs walks through a content recommendation application that combines multimodal video understanding with vector search. Learn how to extract rich multimodal embeddings from video content (covering visuals, audio, scenes, and contextual information) using the Twelve Labs Embed API, and how to store and search those embeddings efficiently in the Qdrant vector database for semantic retrieval. The result is a recommendation system driven by the actual content of videos rather than metadata or tags alone. This presentation suits developers building video AI applications, engineers exploring multimodal retrieval, and anyone interested in video search beyond traditional keyword methods. The complete project is available through the live demo, accompanying blog post, and GitHub repository, so you can implement similar functionality in your own applications.
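The pipeline described above — embed each video, index the vectors, then retrieve similar content by vector distance — can be sketched in plain Python. This is an illustrative sketch only: `embed_video` is a hypothetical stand-in for the Twelve Labs Embed API, and the in-memory dictionary stands in for a Qdrant collection, which would handle the similarity search at scale.

```python
import math

def embed_video(video_id: str) -> list[float]:
    """Hypothetical stand-in for the Twelve Labs Embed API.

    The real API returns multimodal embeddings capturing visuals,
    audio, and scene context; here we fabricate a deterministic
    vector purely for illustration.
    """
    return [(hash((video_id, i)) % 1000) / 1000.0 for i in range(8)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity, the metric Qdrant commonly uses for retrieval."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "Index" a small catalog: video id -> embedding.
# With Qdrant this would be an upsert into a collection.
catalog = {vid: embed_video(vid) for vid in ["intro", "cooking", "travel"]}

def recommend(query_vid: str, k: int = 2) -> list[str]:
    """Return the k videos whose embeddings are closest to the query's."""
    query = catalog[query_vid]
    others = [v for v in catalog if v != query_vid]
    return sorted(others, key=lambda v: cosine(catalog[v], query), reverse=True)[:k]
```

Because recommendations come from embedding similarity, two videos with no shared tags can still be matched when their content is semantically related, which is the core advantage over metadata-based approaches discussed in the talk.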
Syllabus
Vector Space Talk: Video Recommendations with Twelve Labs
Taught by
Qdrant - Vector Database & Search Engine