BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation
Association for Computing Machinery (ACM) via YouTube
Overview
Explore a research presentation on measuring biases in open-ended language generation through the BOLD dataset and its associated metrics. Delve into the work of J. Dhamala, T. Sun, V. Kumar, S. Krishna, Y. Pruksachatkun, K. Chang, and R. Gupta as they discuss their approach to identifying and quantifying biases in AI-generated text. Learn about the construction of the BOLD dataset, its structure, and the metrics designed to evaluate different forms of bias in language models, and gain insight into the implications of this research for building more equitable and responsible AI systems. This 18-minute conference talk, presented at the virtual FAccT 2021 conference, offers valuable knowledge for researchers, data scientists, and AI ethicists working on fairness and accountability in machine learning and natural language processing.
Syllabus
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation
Taught by
ACM FAccT Conference