Overview
Learn about detecting unauthorized use of large language models through fingerprinting techniques in this 15-minute workshop talk by Anshul Nasery from the Paul G. Allen School of Computer Science & Engineering. Explore how LLM fingerprinting can serve as a security measure for identifying when proprietary language models are misused or accessed without permission. Discover the technical approaches used to embed unique identifiers within a language model that can later be detected to prove ownership or unauthorized use. Gain insight into the growing field of AI security and intellectual property protection for large language models, including both the challenges of preventing model theft and misuse and the emerging solutions, in an era when AI models carry significant commercial and research value.
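As a rough illustration of the general idea (not necessarily the specific method presented in the talk), the sketch below shows one common trigger-based approach to LLM fingerprinting in Python: the owner fine-tunes secret trigger-response pairs into the model before release, then later queries a suspect model with those triggers and checks how often the planted responses come back. All names, pairs, and thresholds here are hypothetical.

```python
# Minimal sketch of trigger-based LLM fingerprint verification.
# The fingerprint pairs, generate_fn interface, and threshold are
# illustrative assumptions, not the talk's actual protocol.

from typing import Callable, List, Tuple

# Secret (trigger, expected_response) pairs the owner fine-tuned into
# the model before release; kept private by the owner.
FINGERPRINTS: List[Tuple[str, str]] = [
    ("zqx-verify-7731", "aurora-lattice"),
    ("zqx-verify-0244", "copper-meridian"),
]

def fingerprint_match_rate(generate_fn: Callable[[str], str]) -> float:
    """Query a suspect model with each secret trigger and measure how
    often its output contains the owner's planted response."""
    hits = sum(
        expected in generate_fn(trigger)
        for trigger, expected in FINGERPRINTS
    )
    return hits / len(FINGERPRINTS)

def is_derived_model(generate_fn: Callable[[str], str],
                     threshold: float = 0.8) -> bool:
    """Flag the suspect model if it reproduces the planted responses far
    more often than an unrelated model plausibly would by chance."""
    return fingerprint_match_rate(generate_fn) >= threshold

if __name__ == "__main__":
    # Stand-in for a real model API: a model that memorized the fingerprints.
    leaked = dict(FINGERPRINTS)
    print(is_derived_model(lambda p: leaked.get(p, "")))        # True
    # An unrelated model that never saw the fingerprint data.
    print(is_derived_model(lambda p: "generic completion"))     # False
```

Because the triggers are effectively random strings, an independently trained model almost never produces the planted responses, so a high match rate is strong evidence that the suspect model was derived from the fingerprinted one.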
Syllabus
IFDS Workshop Short Talks: Detecting Model Misuse Through LLM Fingerprinting
Taught by
Paul G. Allen School