Overview
Explore current research on AI persona agents in this 38-minute conference presentation, which examines two studies from Stanford University and UT Austin.

The first, "Probing Belief Formation in Role-Primed LLM Agents," comes from researchers in Stanford's Department of Biomedical Data Science and investigates how large language models develop and maintain beliefs when assigned specific roles. The second, "Harmful Traits of AI Companions," was conducted by an interdisciplinary team from UT Austin's departments of Computer Science, Communication Studies, Psychology, English, and Law, together with collaborators from the Technology & Information Policy Institute, the UT School of Law, El Colegio Mexiquense, and Sony AI.

The presentation covers the mechanisms behind contextual instantiation of AI personas, the implications of role-priming in language models, and the risks and harmful characteristics that can emerge in AI companion systems. It situates these findings within the broader understanding of AI agent behavior, belief systems, and the ethical considerations surrounding AI persona development.
Syllabus
Contextual Instantiation of AI Persona Agents (Stanford)
Taught by
Discover AI