HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data
University of Central Florida via YouTube
Overview
This research presentation from the University of Central Florida covers the HalluciDoctor framework, which addresses hallucinatory toxicity in visual instruction data for AI models. In the 22-minute talk, the presenters discuss methods for identifying and mitigating harmful hallucinations that arise when large language models process visual information. The talk covers the challenges of visual instruction tuning and approaches for reducing toxic outputs while maintaining model performance, with slides available for reference.
Syllabus
Paper 2 : HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data
Taught by
UCF CRCV