Overview
Explore a comprehensive video analysis of the paper "Generative Pretraining from Pixels" by OpenAI researchers. Delve into the application of generative model principles from natural language processing to image processing. Learn about the innovative approach of using a sequence Transformer to predict pixels autoregressively, without using any knowledge of the 2D structure of the input. Discover how this method, trained on low-resolution ImageNet data without labels, achieves strong results in image representation learning. Examine the model's performance in linear probing, fine-tuning, and low-data classification tasks, including its competitive accuracy on CIFAR-10 and ImageNet benchmarks. Follow the detailed breakdown of the model architecture, experimental results, and their implications for the field of computer vision and unsupervised learning.
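The core idea discussed in the video, treating an image as a flat 1D pixel sequence and predicting each next pixel from the ones before it, can be sketched in a few lines of plain Python. This is an illustrative toy (a hypothetical 2×2 grayscale grid, not data or code from the paper):

```python
# Autoregressive pixel prediction setup, as in "Generative
# Pretraining from Pixels": flatten a 2D image into a 1D
# sequence (raster order) and form (context, next-pixel) pairs.
# Toy 2x2 grayscale image, purely illustrative.
image = [
    [10, 20],
    [30, 40],
]

# Raster-scan flattening: row by row, left to right. Note the
# model never sees the 2D layout, only this 1D sequence.
sequence = [px for row in image for px in row]  # [10, 20, 30, 40]

# Training pairs: given sequence[:i], predict sequence[i].
pairs = [(sequence[:i], sequence[i]) for i in range(1, len(sequence))]
for context, target in pairs:
    print(context, "->", target)
```

A Transformer trained on such pairs learns representations of the image purely from next-pixel prediction; those representations are then evaluated via linear probing or fine-tuning, as covered in the syllabus below.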
Syllabus
- Intro & Overview
- Generative Models for Pretraining
- Pretraining for Visual Tasks
- Model Architecture
- Linear Probe Experiments
- Fine-Tuning Experiments
- Conclusion & Comments
Taught by
Yannic Kilcher