From Large Language Models to Large Multimodal Models - Stanford CS25 - Lecture 4
Stanford University via YouTube
Overview
Explore the evolution from large language models to large multimodal models in this Stanford University lecture. Delve into the basics of large language models and examine the academic community's efforts in developing multimodal models over the past year. Learn about CogVLM, a powerful open-source multimodal model with 17B parameters, and CogAgent, a model designed for GUI and OCR scenarios. Discover applications of multimodal models and potential research directions in academia. Speaker Ming Ding, a research scientist at Zhipu AI, shares insights on multimodal generative models, multimodal understanding models, and language models. Gain valuable knowledge about integrating visual perception with language model capabilities in this 1 hour and 20 minute presentation from the Stanford CS25 Transformers United series.
Syllabus
Stanford CS25: V4 I From Large Language Models to Large Multimodal Models
Taught by
Stanford Online