Trustworthy LLMs - Mitigating Issues in Social Bias, Safety and Reliability
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Join a thought-provoking lecture by USC researcher Jieyu Zhao exploring the critical challenges and solutions involved in developing trustworthy large language models (LLMs). Delve into cutting-edge research on auditing models, detecting and mitigating social biases, and understanding LLM decision-making processes. Learn about the dual nature of LLMs: their potential for positive societal impact through enhanced accessibility, communication, disaster response, and public health initiatives, alongside crucial concerns around accountability, fairness, and transparency. Discover ongoing efforts to create more inclusive and ethically sound LLM practices, contributing to a broader dialogue on responsible AI development and deployment.
Syllabus
Trustworthy LLMs — our efforts on mitigating issues regarding social bias, safety, and reliability
Taught by
Center for Language & Speech Processing (CLSP), JHU