LLMOps: Fine-Tuning Video Classifier (ViViT) with Custom Data
The Machine Learning Engineer via YouTube
Overview
Learn how to fine-tune a Video Vision Transformer (ViViT) on your own dataset in this comprehensive 44-minute tutorial. Explore the process of taking a pretrained Google model (google/vivit-b-16x2-kinetics400), originally trained on the Kinetics-400 dataset, and adapting it to classify videos from a different dataset. Gain hands-on experience applying LLMOps techniques to machine learning and data science workflows. Access the accompanying code repository on GitHub to follow along and sharpen your skills in video classification with state-of-the-art transformer models.
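The core step the tutorial covers, loading the Kinetics-400 checkpoint and replacing its 400-class head with one sized to your own labels, can be sketched as follows. This is a minimal sketch, assuming the Hugging Face `transformers` library; the label names are hypothetical placeholders, not from the video.

```python
# Sketch: preparing google/vivit-b-16x2-kinetics400 for fine-tuning on a
# custom dataset (assumes Hugging Face `transformers`; labels are placeholders).

labels = ["cooking", "dancing", "playing_guitar"]  # replace with your classes
label2id = {name: i for i, name in enumerate(labels)}
id2label = {i: name for name, i in label2id.items()}

def build_model():
    """Load the pretrained ViViT checkpoint with a fresh classification head."""
    from transformers import VivitForVideoClassification, VivitImageProcessor

    processor = VivitImageProcessor.from_pretrained(
        "google/vivit-b-16x2-kinetics400"
    )
    model = VivitForVideoClassification.from_pretrained(
        "google/vivit-b-16x2-kinetics400",
        num_labels=len(labels),
        id2label=id2label,
        label2id=label2id,
        # The new head's shape differs from the 400-class checkpoint head,
        # so mismatched weights are discarded and re-initialized.
        ignore_mismatched_sizes=True,
    )
    return processor, model

# Usage (downloads the checkpoint, so not run here):
#   processor, model = build_model()
#   then fine-tune with transformers.Trainer or a plain PyTorch loop.
```

From here the usual flow is to feed sampled video frames through the processor and train with `transformers.Trainer`; see the linked GitHub repository for the exact pipeline used in the video.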
Syllabus
LLMOps: Fine-Tune Video Classifier (ViViT) with Your Own Data #machinelearning #datascience
Taught by
The Machine Learning Engineer