Computational Argumentation for Fair and Explainable AI Decision-making
Association for Computing Machinery (ACM) via YouTube
Overview
Explore how computational argumentation can enhance fairness and explainability in AI decision-making systems in this 51-minute conference talk presented by researchers from King's College London and the University of Lincoln. Discover the theoretical foundations and practical applications of argumentation frameworks for addressing the challenges of AI transparency and bias mitigation. Learn about approaches that leverage formal argumentation structures to make AI reasoning processes more interpretable and equitable. Examine case studies and methodologies demonstrating how computational argumentation can bridge complex AI algorithms and human understanding, enabling stakeholders to better comprehend and evaluate automated decisions. Gain insights into research that combines argumentation theory with machine learning to create more trustworthy and accountable AI systems.
Syllabus
Computational Argumentation for Fair and Explainable AI Decision-making
Taught by
ACM FAccT Conference