Explore a 46-minute conference talk from the 38th Chaos Communication Congress (38C3) examining the discriminatory impacts of algorithmic decision-making systems in social welfare programs across multiple countries. Learn about Amnesty International's research findings on automated welfare systems in the Netherlands, India, Serbia, and Denmark, revealing how these technologies often perpetuate existing biases and injustices rather than improving fairness.

Discover how the Dutch fraud detection algorithm explicitly used nationality as a risk factor, leading to unjustified benefit cuts, and examine Denmark's comprehensive system that processes vast amounts of personal data through multiple algorithms, potentially qualifying as a prohibited social scoring system under EU AI laws.

Understand the challenges of automation bias, data integrity issues, increased surveillance, and human rights violations in welfare systems, particularly affecting vulnerable populations who depend on social benefits for survival.