Understanding Task Vectors in Vision-Language Models - Cross-Modal Representations
Discover AI via YouTube
Overview
Explore research from UC Berkeley examining how vision-and-language models (VLMs) develop and employ "task vectors": internal representations that enable cross-modal task performance. Dive into the discovery that these latent activations capture the essence of a task in a space shared across text and image modalities, allowing a model to apply a task specified in one format to queries in another. Learn about the three-phase query-processing pattern in which token representations evolve from raw inputs, to task-specific representations, and finally to answer-aligned vectors. Understand how combining instruction-based and example-based task vectors yields more effective representations for handling complex scenarios with limited data. Examine experimental evidence showing that text-derived instruction vectors can guide image queries, improving performance over traditional unimodal approaches. Discover the implications of this research for building more adaptable, context-aware AI systems that use unified task embeddings for cross-modal inference.
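The cross-modal patching idea described above can be sketched in a few lines. This is a toy NumPy illustration under stated assumptions, not the paper's implementation: it assumes a task vector is the mean final-token activation at some chosen layer across in-context examples, and that "patching" overwrites the final-token activation of a new query at that same layer. All array shapes and the averaging choice here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16  # toy hidden size (assumption)

# Pretend these are layer-L hidden states from four text in-context
# examples of the same task; each row is one token's activation.
example_states = [rng.normal(size=(5, d_model)) for _ in range(4)]

# Task vector: mean of the final-token activations across the examples
# (one common recipe for extracting task vectors; an assumption here).
task_vector = np.mean([s[-1] for s in example_states], axis=0)

# "Patch" the vector into a new query from another modality by
# overwriting its final-token activation at the same layer.
query_states = rng.normal(size=(7, d_model))  # e.g. an image query
patched = query_states.copy()
patched[-1] = task_vector

# Earlier token activations are untouched; only the last is replaced.
assert np.allclose(patched[:-1], query_states[:-1])
assert np.allclose(patched[-1], task_vector)
```

In a real VLM the activations would come from forward hooks on a transformer layer rather than random arrays, but the extract-average-patch pattern is the same.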
Syllabus
Inside the VLM: NEW "Task Vectors" emerge (UC Berkeley)
Taught by
Discover AI