Benchmarking MLLMs for Embodied Decision Making and Cognitive World Modeling
USC Information Sciences Institute via YouTube
Overview
Attend this research seminar to explore systematic approaches to benchmarking Multimodal Large Language Models (MLLMs) as embodied agents. MLLM performance is difficult to compare across domains because models are typically evaluated with varying inputs, outputs, and purposes. The talk presents two comprehensive benchmarks that address these evaluation gaps through systematic task definitions and standardized interfaces, enabling formalized assessment of MLLM-based embodied agent capabilities. It also covers scalable data collection methodologies and verifiable evaluation techniques built on simulation environments, along with insights into spatial cognition and decision making in embodied AI systems and how MLLMs can be measured and compared across embodied learning scenarios. The seminar connects multimodality research, combining vision and language processing, with practical embodied agent development, and is presented by a researcher actively contributing to foundation models and embodied AI through workshop organization and award-winning research.
Taught by
USC Information Sciences Institute