Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding
University of Central Florida via YouTube
Overview
Learn about techniques for reducing object hallucinations in Large Vision-Language Models (LVLMs) through a 27-minute research presentation from the University of Central Florida. Explore the Visual Contrastive Decoding methodology, which contrasts the model's output distributions for the original image and a distorted version of it, suppressing tokens driven by language priors rather than visual evidence. Examine how this approach reduces mentions of objects that are not actually present in the image and improves the reliability of LVLMs in real-world applications. Gain insight into the technical aspects of vision-language decoding and recent advances in reducing AI hallucinations.
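As a rough illustration of the idea, the following sketch contrasts next-token logits computed with the original image against logits computed with a distorted (e.g., noised) image. The function name, the `alpha`/`beta` parameters, and the toy logit values are illustrative assumptions based on the general contrastive-decoding formulation, not the paper's exact implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def visual_contrastive_decoding(logits_original, logits_distorted,
                                alpha=1.0, beta=0.1):
    """Pick the next token by contrasting two logit vectors.

    logits_original: next-token logits given the real image.
    logits_distorted: logits given a distorted image, where the model
    falls back on language priors (a common source of hallucination).
    alpha and beta are illustrative hyperparameters (assumed names).
    """
    logits_original = np.asarray(logits_original, dtype=float)
    logits_distorted = np.asarray(logits_distorted, dtype=float)

    # Amplify tokens the model prefers only when it can see the real
    # image; penalize tokens it would emit regardless of the visuals.
    contrastive = (1 + alpha) * logits_original - alpha * logits_distorted

    # Plausibility constraint: only consider tokens whose probability
    # under the original image is at least beta times the top token's,
    # so the contrast cannot promote outright implausible tokens.
    probs = softmax(logits_original)
    mask = probs >= beta * probs.max()
    contrastive = np.where(mask, contrastive, -np.inf)
    return int(np.argmax(contrastive))

# Toy example: token 1 is a hallucinated object favored by the language
# prior (it stays likely even with a distorted image); token 0 is the
# visually grounded choice.
logits_orig = [1.8, 2.0, 0.0]
logits_dist = [0.0, 2.5, 0.0]
print(int(np.argmax(logits_orig)))                             # greedy picks 1
print(visual_contrastive_decoding(logits_orig, logits_dist))   # contrast picks 0
```

In the toy example, plain greedy decoding selects the hallucinated token, while the contrastive score demotes it because the distorted image supports it just as strongly, which is the behavior the presentation attributes to Visual Contrastive Decoding.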
Syllabus
Paper 1: Mitigating Object Hallucinations in LVLMs through Visual Contrastive Decoding
Taught by
UCF CRCV