Quantifying and Reducing Gender Stereotypes in Word Embeddings
Association for Computing Machinery (ACM) via YouTube
Overview
Explore gender stereotypes in word embeddings and learn techniques to quantify and reduce bias in this hands-on tutorial from the FAT* 2018 conference. Start with the basics of how word embeddings are learned and applied, then gain practical experience writing programs to display and measure gender stereotypes in these widely used natural language processing tools. Discover methods to mitigate bias and create fairer algorithmic decision-making processes. Work with IPython notebooks to explore real-world examples and complete exercises that reinforce concepts of fairness in machine learning and natural language processing.
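The core measurement the tutorial builds toward can be sketched briefly: project word vectors onto a gender direction (e.g. the difference between the "he" and "she" vectors) to quantify bias, then subtract that component to neutralize it. The toy 4-dimensional vectors below are invented for illustration; the actual tutorial works with pretrained embeddings such as word2vec.

```python
import numpy as np

# Toy embeddings for illustration only -- these values are made up,
# not taken from any real pretrained model.
emb = {
    "he":       np.array([ 1.0, 0.2, 0.0, 0.1]),
    "she":      np.array([-1.0, 0.2, 0.0, 0.1]),
    "engineer": np.array([ 0.4, 0.8, 0.3, 0.0]),
    "nurse":    np.array([-0.5, 0.7, 0.2, 0.1]),
}

def normalize(v):
    return v / np.linalg.norm(v)

# Gender direction: difference of a definitional pair, normalized.
g = normalize(emb["he"] - emb["she"])

def gender_bias(word):
    """Scalar projection of the normalized word vector onto the gender axis."""
    return float(np.dot(normalize(emb[word]), g))

def debias(word):
    """Neutralize step: remove the gender component from a word vector."""
    v = emb[word]
    return v - np.dot(v, g) * g

for w in ("engineer", "nurse"):
    after = float(np.dot(normalize(debias(w)), g))
    print(f"{w}: bias before = {gender_bias(w):+.3f}, after = {after:+.3f}")
```

A positive projection indicates a lean toward "he", a negative one toward "she"; after neutralizing, the projection of a gender-neutral word onto the gender axis is zero.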
Syllabus
FAT* 2018 Hands-on Tutorial: Quantifying and Reducing Gender Stereotypes in Word Embeddings
Taught by
ACM FAccT Conference