
YouTube

GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models

Yannic Kilcher via YouTube

Overview

Explore the groundbreaking GLIDE model for text-to-image generation in this comprehensive video lecture. Delve into the mechanics of diffusion models and their application in creating photorealistic images from text descriptions. Learn about conditional generation techniques, guided diffusion, and the architecture behind GLIDE. Examine training methodologies, result metrics, and potential failure cases. Gain insights into safety considerations surrounding this powerful technology. Discover how GLIDE compares to other models like DALL-E and understand its implications for text-driven image editing and inpainting.
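The guided diffusion the video discusses is classifier-free guidance, which the GLIDE authors found to outperform CLIP guidance for text conditioning. A minimal sketch of the guidance step is below; the function name and toy values are illustrative, and real models produce image-shaped noise tensors rather than small arrays:

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, scale):
    """Blend conditional and unconditional noise predictions.

    At each denoising step, the guided estimate is
        eps = eps_uncond + scale * (eps_cond - eps_uncond),
    where scale > 1 pushes samples toward the text condition
    at some cost in diversity.
    """
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy example with dummy noise predictions.
eps_c = np.array([0.5, -0.2])  # prediction given the text prompt
eps_u = np.array([0.1, 0.0])   # prediction with an empty prompt
print(classifier_free_guidance(eps_c, eps_u, 3.0))
```

With `scale = 1.0` the guided estimate reduces to the plain conditional prediction; larger scales extrapolate further along the direction the text condition implies.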

Syllabus

- Intro & Overview
- What is a Diffusion Model?
- Conditional Generation and Guided Diffusion
- Architecture Recap
- Training & Result metrics
- Failure cases & my own results
- Safety considerations

Taught by

Yannic Kilcher

Reviews

4.0 rating, based on 1 Class Central review

Start your review of GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models

  • Tochi Clement
    I can now understand photorealistic image generation and editing with text-guided diffusion models, which is what I needed to take on a remote contract in AI image generation.
