Overview
In this 16-minute conference talk from Conf42 ML 2025, Georgina Tryfou explores the critical role of AI in combating harmful content online and shows how modern content moderation systems are designed to scale with growing volumes of digital communication. The presentation opens with the speaker's background before explaining why AI has become essential for content moderation. Learn about the challenges of detecting harmful content across different contexts and languages, then examine a system architecture overview followed by a detailed breakdown of the moderation pipeline. Understand the specific AI models employed and their roles within the safety ecosystem, and explore real-world applications and integration with existing platforms. The talk also addresses ethical considerations and privacy concerns in deploying these systems, surveys upcoming developments in the field, and concludes with takeaways and lessons learned from running AI safety systems in production.
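The talk presents its own pipeline design; purely as a rough illustration of the pattern described (preprocessing, scoring by a model, then a policy decision), here is a minimal sketch. The keyword-weight scorer and the thresholds are placeholders standing in for real classifier models, not anything from the presentation:

```python
from dataclasses import dataclass

# Illustrative thresholds -- an assumption, not values from the talk.
BLOCK_THRESHOLD = 0.8
REVIEW_THRESHOLD = 0.5

# Toy keyword weights standing in for a trained harmful-content model.
KEYWORD_WEIGHTS = {"attack": 0.6, "hate": 0.7, "spam": 0.5}


@dataclass
class Decision:
    action: str   # "allow" | "review" | "block"
    score: float


def preprocess(text: str) -> list[str]:
    """Lowercase and split into tokens; real systems normalize far more."""
    return text.lower().split()


def score_text(tokens: list[str]) -> float:
    """Stand-in scorer: sum keyword weights, capped at 1.0."""
    return min(1.0, sum(KEYWORD_WEIGHTS.get(t, 0.0) for t in tokens))


def moderate(text: str) -> Decision:
    """Chain the stages: preprocess -> score -> threshold policy."""
    score = score_text(preprocess(text))
    if score >= BLOCK_THRESHOLD:
        return Decision("block", score)
    if score >= REVIEW_THRESHOLD:
        return Decision("review", score)
    return Decision("allow", score)
```

Production systems replace the scorer with one or more ML models and add human review queues, but the allow/review/block thresholding shape is a common baseline.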
Syllabus
00:00 Introduction and Speaker Background
00:39 The Importance of AI in Content Moderation
01:36 Challenges in Detecting Harmful Content
03:25 System Architecture Overview
03:50 Detailed Pipeline Breakdown
08:06 Model Specifics and Their Roles
09:38 Real-World Applications and Integrations
10:59 Ethical Considerations and Privacy
12:59 Future Developments and Innovations
14:57 Key Takeaways and Lessons Learned
15:56 Conclusion and Final Thoughts
Taught by
Conf42