Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
This video demonstrates how even the latest large language models (LLMs) like GPT-4.1 struggle with decision-making and reasoning when confronted with logical inconsistencies in data inputs. Watch a live demonstration focusing on vulnerabilities that emerge when external data from databases, internet sources, or corporate files is imported through advanced information retrievers (the "R" in RAG) or data protocols. Learn why security RAG templates fail to provide adequate protection and how structured reasoning only works for scenarios with low and incoherent complexity. Based on research by Wenxiao Wang, Parsa Hosseini, and Soheil Feizi from the University of Maryland's Department of Computer Science, who authored "Chain-of-Defensive-Thought: Structured Reasoning Elicits Robustness in Large Language Models against Reference Corruption."
Syllabus
Security Breach w/ RAG: Demo w/ GPT-4.1
Taught by
Discover AI