

Self-(In)Correct: LLMs Struggle with Discriminating Self-Generated Responses

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Watch a 12-minute research presentation from Johns Hopkins University's Center for Language & Speech Processing that examines whether large language models (LLMs) can improve their own outputs through self-discrimination. Learn about a unified framework developed to compare the generative and discriminative capabilities of LLMs, and discover key findings that challenge the assumption that these models can enhance their performance through self-judgment alone. Explore experimental analyses of various open-source and industrial LLMs, which reveal that models do not consistently discriminate between previously generated alternatives any better than they generate initial responses. Presented by researcher Dongwei Jiang, this talk covers findings from the paper investigating the limitations of LLMs in self-correction and discrimination tasks.

Syllabus

Self-(In)Correct: LLMs Struggle with Discriminating Self-Generated Responses --- AAAI 2025

Taught by

Center for Language & Speech Processing (CLSP), JHU

