

Dehumanizing Agents - Why Explainability is Crucial in the LLM Era

NDC Conferences via YouTube

Overview

Explore the critical importance of explainable AI in the era of large language models through this comprehensive conference talk that addresses the inherent biases and limitations of artificial intelligence systems. Examine how AI models inherit human biases from training data and learn why explainability becomes essential when LLMs confidently present information, even when generating inaccurate content. Discover the evolution from traditional machine learning explainability methods to novel approaches specifically designed for generative AI systems. Delve into various implementation methods ranging from established techniques to cutting-edge research proposals, while understanding the main risks and challenges encountered when implementing explainability in AI systems. Learn practical solutions for overcoming these challenges and gain valuable insights into integrating explanations into LLM-based workflows, including strategies for working with third-party services that lack native explainability features. Master the techniques needed to break open the "black box" of AI models and understand their decision-making processes in an era where trust in AI output is paramount.
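To make the "established techniques" mentioned above concrete, here is a minimal sketch of perturbation-based (leave-one-out) attribution, one of the classic model-agnostic explainability methods the talk builds on: each input token's importance is estimated as the drop in the model's output score when that token is removed. The `toy_score` function below is a hypothetical stand-in for a real model's confidence, used only so the example runs on its own.

```python
def leave_one_out_attribution(tokens, score_fn):
    """Importance of each token = drop in score when that token is removed."""
    base = score_fn(tokens)
    importances = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]
        importances.append(base - score_fn(reduced))
    return importances


def toy_score(tokens):
    # Hypothetical scoring function: pretends the model's confidence is the
    # fraction of tokens that are "relevant" keywords. A real setup would
    # query the model (e.g. the log-probability of its answer) instead.
    keywords = {"refund", "delayed"}
    return sum(1 for t in tokens if t in keywords) / max(len(tokens), 1)


tokens = ["my", "order", "was", "delayed", "please", "refund"]
scores = leave_one_out_attribution(tokens, toy_score)
# Keyword tokens receive positive importance (removing them lowers the
# score); filler tokens receive negative importance.
```

The same loop works against a third-party LLM API that exposes no native explainability, one of the integration scenarios the talk covers: `score_fn` simply becomes a wrapper around the remote call, at the cost of one extra request per token.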

Syllabus

Dehumanizing Agents: Why Explainability is Crucial in the LLM Era - Lucía Conde-Moreno

Taught by

NDC Conferences

