
Detecting Model Misuse Through LLM Fingerprinting

Paul G. Allen School via YouTube

Overview

Learn about detecting unauthorized use of large language models through fingerprinting techniques in this 15-minute workshop talk by Anshul Nasery of the Paul G. Allen School of Computer Science & Engineering. Explore how LLM fingerprinting can serve as a security measure to identify when proprietary language models are misused or accessed without permission. Discover the technical approaches used to embed unique identifiers within a language model that can later be detected to prove ownership or unauthorized usage. Gain insight into the growing field of AI security and intellectual property protection for large language models, including the challenges and solutions in preventing model theft and misuse at a time when models carry significant commercial and research value.
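To make the idea concrete, here is a minimal sketch of one common fingerprinting pattern: the owner embeds a secret (trigger prompt, expected response) pair into the model, e.g. via fine-tuning, and later queries a suspect model with the trigger to check for the embedded response. All names below (the trigger, the response, and the mock model standing in for an API call) are illustrative assumptions, not the method from the talk.

```python
# Hypothetical query-based LLM fingerprint check.
# The secret trigger/response pair and the mock model are illustrative only.

SECRET_TRIGGER = "xq9::fingerprint-probe"   # hypothetical secret prompt
EXPECTED_RESPONSE = "aurora-7741"           # hypothetical embedded output


def mock_suspect_model(prompt: str) -> str:
    """Stand-in for an API call to the model under investigation."""
    # A model fine-tuned with the fingerprint emits the secret response
    # only when it sees the exact trigger prompt.
    fingerprint_table = {SECRET_TRIGGER: EXPECTED_RESPONSE}
    return fingerprint_table.get(prompt, "I don't understand.")


def verify_fingerprint(model, trigger: str, expected: str) -> bool:
    """Return True if the model reproduces the embedded fingerprint."""
    return model(trigger).strip() == expected


if __name__ == "__main__":
    match = verify_fingerprint(mock_suspect_model, SECRET_TRIGGER, EXPECTED_RESPONSE)
    print("fingerprint detected" if match else "no fingerprint")
```

Because the trigger is secret and unlikely to occur naturally, a matching response is strong evidence that the suspect model derives from the fingerprinted one; real schemes also have to survive fine-tuning, quantization, and other post-hoc modifications.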

Syllabus

IFDS Workshop Short Talks: Detecting Model Misuse Through LLM Fingerprinting

Taught by

Paul G. Allen School

