Harmful Speech Detection by Language Models - Gender-Queer Dialect Bias
Association for Computing Machinery (ACM) via YouTube
Overview
Watch a 19-minute ACM conference presentation examining how language models exhibit bias when detecting harmful speech in gender-queer dialects. Explore research findings from Rebecca Dorn, Lee Kezar, Fred Morstatter, and Kristina Lerman that reveal systematic biases in how AI systems process and flag potentially harmful content across different gender expressions and linguistic patterns.
Syllabus
Harmful Speech Detection by Language Models Exhibits Gender-Queer Dialect Bias
Taught by
Association for Computing Machinery (ACM)