HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data
University of Central Florida via YouTube
Overview
This research presentation from the University of Central Florida explores the HalluciDoctor framework, which addresses hallucinatory toxicity issues in visual instruction data for AI models. During the 22-minute talk, researchers discuss methods for identifying and mitigating harmful hallucinations that can occur when large language models process visual information. Learn about the challenges of visual instruction tuning and the innovative approaches developed to reduce toxic outputs while maintaining model performance. The presentation includes detailed slides available for reference, offering insights into this critical area of AI safety research.
Syllabus
Paper 2: HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data
Taught by
UCF CRCV