Harmful Speech Detection by Language Models - Gender-Queer Dialect Bias
Association for Computing Machinery (ACM) via YouTube
Overview
Watch a 19-minute ACM conference presentation examining how language models exhibit bias when detecting harmful speech in gender-queer dialects. The research, by Rebecca Dorn, Lee Kezar, Fred Morstatter, and Kristina Lerman, reveals systematic biases in how AI systems process and flag potentially harmful content across different gender expressions and linguistic patterns.
Syllabus
Harmful Speech Detection by Language Models Exhibits Gender-Queer Dialect Bias
Taught by
Association for Computing Machinery (ACM)