Overview
This research presentation examines critical safety vulnerabilities in AI-powered search engines and their potential to disseminate harmful content. Researchers from The Hong Kong University of Science and Technology conducted the first systematic quantification of safety risks across seven production AI-powered search engines, showing that these systems frequently return responses containing malicious URLs even when processing benign queries.

The presentation covers the threat models and risk types the study defines, along with its data-collection methodology, which draws on PhishTank, ThreatBook, and LevelBlue to evaluate how different query types affect response safety. A comparative analysis finds that AI-powered search engines outperform traditional search engines in both utility and safety, while persistent vulnerabilities remain. Two detailed case studies demonstrate real-world exploitation scenarios: online document spoofing and phishing attacks that successfully deceive AI search systems.

Finally, the talk presents mitigation strategies, including an agent-based defense system that combines GPT-4.1-based content refinement tools with URL detection mechanisms, effectively reducing safety risks with minimal impact on information availability. It closes with the urgent implications for implementing robust safety measures in AI-powered search technologies and the broader cybersecurity landscape.
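The URL-detection side of the mitigation described above can be illustrated with a minimal sketch: extract URLs from a generated answer and redact any that appear on a known-malicious blocklist before the response reaches the user. This is not the authors' implementation; the `BLOCKLIST` entries, the example domain, and the function names here are purely illustrative (in the actual study, feeds such as PhishTank, ThreatBook, and LevelBlue supply the malicious-URL data).

```python
import re

# Illustrative blocklist; a real deployment would populate this from
# threat-intelligence feeds such as PhishTank, ThreatBook, or LevelBlue.
BLOCKLIST = {"http://phish.example.com/login"}

# Rough URL matcher: an http(s) scheme followed by non-delimiter characters.
URL_PATTERN = re.compile(r"https?://[^\s)\"'>]+")

def filter_response(text: str, blocklist: set[str] = BLOCKLIST) -> str:
    """Redact known-malicious URLs from an LLM-generated answer."""
    def redact(match: re.Match) -> str:
        url = match.group(0)
        # Replace blocklisted links with a placeholder; keep safe links intact.
        return "[removed unsafe link]" if url in blocklist else url
    return URL_PATTERN.sub(redact, text)
```

A production system would combine such a filter with content-level refinement (the study pairs URL detection with a GPT-4.1-based refinement tool), since exact-match blocklists miss newly registered phishing domains.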
Syllabus
USENIX Security '25 - Unsafe LLM-Based Search: Quantitative Analysis and Mitigation of Safety...
Taught by
USENIX