Amazon Nova Multimodal Embeddings - Cross-Modal Search and Retrieval with Unified Embedding Models
AWS Events via YouTube
Overview
Explore Amazon Nova Multimodal Embeddings in this 39-minute AWS Show and Tell session on unlocking value from unstructured data by connecting information across content types with a single unified model. The session introduces Amazon's first embedding model to support text, documents, images, video, and audio in one integrated solution, removing the need to manage multiple specialized models. Learn how diverse content types are converted into embeddings within a unified semantic space, and watch practical demonstrations of mixed-modality content handling, from documents with interleaved text and images to videos containing visual, audio, and text elements. Demonstrations cover reference-based image search, document retrieval, and other cross-modal applications, along with guidance on powering multimodal applications and AI agents, building cross-modal search and retrieval systems, and handling mixed-modality content in AI-driven solutions.
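The cross-modal retrieval idea described above, once every asset is embedded into one shared semantic space, reduces to nearest-neighbor search by cosine similarity. The sketch below illustrates this with toy, hand-written vectors; in practice the embeddings would come from calls to the embedding model (the vector values, file names, and the `search` helper here are purely hypothetical, not part of any AWS API).

```python
import math

# Hypothetical pre-computed embeddings in one shared semantic space.
# Real systems would obtain these from an embedding model; the toy
# 3-dimensional vectors below exist only to illustrate the retrieval step.
index = {
    "photo_of_cat.jpg": [0.9, 0.1, 0.0],
    "report.pdf":       [0.1, 0.8, 0.3],
    "demo_video.mp4":   [0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, index, top_k=1):
    """Return the top_k asset names ranked by similarity to the query."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Because text, images, video, and audio share one space, a text query's
# embedding can retrieve an image directly:
query = [0.85, 0.15, 0.05]   # hypothetical embedding of the text "a cat"
print(search(query, index))  # -> ['photo_of_cat.jpg']
```

This is the core benefit of a unified embedding model: one similarity function and one index serve every modality, instead of separate pipelines per content type.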
Syllabus
Nova Multimodal Embeddings Session | AWS Show and Tell - Generative AI
Taught by
AWS Events