Generative AI Part 2

Northeastern University via Coursera

Overview

Introduces the theoretical foundations and advanced concepts of neural networks, generative models, transformers, and large language models. Students will explore how these AI systems create new data, process information, and learn through feedback, while analyzing their applications across various fields. The course emphasizes key principles in model building, optimization, and real-world generative AI use cases.

Syllabus

  • Transformer-Based Language Models and Pre-Training
    • In this module, you will explore Transformer-based models in natural language processing. You will study pretraining approaches such as BERT and GPT, the mathematics of pretraining word embeddings, and various optimization and scaling strategies critical to effective language modeling.
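The core operation underlying the Transformer models this module covers is scaled dot-product attention. As an illustrative sketch (not course material), it can be written in a few lines of NumPy; the toy query/key/value matrices are placeholders for learned projections of token embeddings:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

Pre-training objectives such as BERT's masked language modeling and GPT's next-token prediction both optimize networks built from stacks of this operation.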
  • Variational Autoencoders and Deep Latent Variable Models
    • This module investigates deep latent variable models, focusing on variational autoencoders (VAEs) and related probabilistic methods. You will analyze the mathematics behind sampling strategies, evidence lower bound (ELBO), variational inference, reparameterization tricks, and amortized inference, developing an advanced toolkit for probabilistic generative modeling.
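Two of the ingredients named above, the reparameterization trick and the KL term of the ELBO, have short closed forms for a diagonal Gaussian posterior. A minimal NumPy sketch (function names are illustrative, not from the course):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I); the randomness is
    isolated in eps, so gradients can flow through mu and log_var."""
    eps = rng.normal(size=np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(log_var)) * eps

def kl_diag_gaussian_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) term of the ELBO for a diagonal
    Gaussian q with mean mu and log-variance log_var."""
    mu, log_var = np.asarray(mu), np.asarray(log_var)
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

print(kl_diag_gaussian_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # 0.0
```

The KL is exactly zero when the posterior already equals the prior, and grows as the encoder's mean or variance drifts away from N(0, I).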
  • Normalizing Flows
    • In this module, you'll explore normalizing flows as precise tools for modeling complex probability distributions through invertible neural networks. You’ll examine the underpinnings, including determinants, geometry, invertibility constraints, and specific flow architectures like Real-NVP and autoregressive models. You'll also investigate practical applications and synthesis of complex densities using normalizing flows.
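The Real-NVP coupling layer mentioned above is a good example of why invertibility matters: the transform is exactly invertible and its Jacobian is triangular, so the log-determinant needed for the change-of-variables formula is just a sum. A toy sketch, with simple lambdas standing in for the small neural networks that would normally condition the transform:

```python
import numpy as np

def coupling_forward(x, s_fn, t_fn):
    """Real-NVP-style affine coupling: pass x1 through unchanged, transform
    x2 conditioned on x1. Triangular Jacobian => log|det J| = sum(s)."""
    x1, x2 = np.split(x, 2, axis=-1)
    s, t = s_fn(x1), t_fn(x1)
    return np.concatenate([x1, x2 * np.exp(s) + t], axis=-1), s.sum(axis=-1)

def coupling_inverse(y, s_fn, t_fn):
    """Exact inverse of the coupling step, needed for density evaluation."""
    y1, y2 = np.split(y, 2, axis=-1)
    s, t = s_fn(y1), t_fn(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

# Toy conditioners standing in for learned networks:
s_fn = lambda h: np.tanh(h)
t_fn = lambda h: 0.5 * h

x = np.array([[0.3, -1.2, 0.7, 2.0]])
y, log_det = coupling_forward(x, s_fn, t_fn)
print(np.allclose(coupling_inverse(y, s_fn, t_fn), x))  # True
```

Stacking several such layers, with the split halves alternating, yields an expressive yet tractable density model.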
  • Generative Adversarial Networks
    • This module provides a deep exploration of Generative Adversarial Networks (GANs), focusing on their formulation as likelihood-free generative models. You'll analyze GAN training dynamics, including optimization challenges, mode collapse, and divergence minimization strategies. The module also covers advanced GAN variants such as f-GAN and Wasserstein GAN (WGAN).
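The minimax formulation and the mode-collapse-prone training dynamics described here come down to two coupled losses. A hedged numerical sketch (the discriminator outputs are made-up constants, not a trained model), including the non-saturating generator loss commonly used in place of the original objective:

```python
import numpy as np

def bce(p, label, eps=1e-9):
    """Binary cross-entropy of probabilities p against a constant label."""
    return -np.mean(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

# Hypothetical discriminator outputs D(x) on real data and D(G(z)) on samples:
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.1, 0.3, 0.2])

d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)  # discriminator: real -> 1, fake -> 0
g_loss = bce(d_fake, 1.0)  # non-saturating generator loss: maximize log D(G(z))
print(d_loss, g_loss)
```

Variants like WGAN replace the cross-entropy with a critic-based Wasserstein objective precisely to tame the divergence-minimization pathologies this module analyzes.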
  • Energy-Based Models and Score-Based Models
    • In this module, you will explore energy-based generative models and score-based modeling frameworks from a mathematical and implementation perspective. You'll dive deeply into the details of training via score functions, contrastive divergence, and various forms of score matching including denoising techniques, highlighting their theoretical and practical implications.
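Denoising score matching, one of the techniques named above, avoids intractable normalizing constants by regressing the model's score at a noise-perturbed point onto the known score of the Gaussian perturbation kernel. A small sketch under toy assumptions (the closed-form "model" below is the exact score for standard-normal data, used only for illustration):

```python
import numpy as np

def dsm_loss(score_fn, x, sigma, rng):
    """Denoising score matching: perturb x with N(0, sigma^2 I) noise and
    regress score_fn at the noisy point onto the perturbation-kernel score
    -(x_noisy - x) / sigma^2."""
    noise = sigma * rng.normal(size=x.shape)
    x_noisy = x + noise
    target = -noise / sigma**2
    return np.mean(np.sum((score_fn(x_noisy) - target) ** 2, axis=-1))

# If the data is exactly N(0, I), the smoothed density is N(0, (1+sigma^2) I)
# and its score is s(x) = -x / (1 + sigma^2):
x = np.random.default_rng(1).normal(size=(10000, 2))
sigma = 0.5
loss = dsm_loss(lambda y: -y / (1 + sigma**2), x, sigma, np.random.default_rng(2))
print(loss)
```

The optimal score does not drive the loss to zero (a residual variance term remains), but it beats any other score function in expectation, which is what makes the objective a valid training signal.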
  • Diffusion Models
    • You'll delve deeply into diffusion models, understanding them mathematically as stochastic processes and connecting them explicitly to score-based models. The module examines forward and reverse diffusion processes, training objectives, SDEs, predictor-corrector methods, and latent diffusion architectures, providing robust foundations for modern generative modeling.
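The forward diffusion process this module examines has a convenient closed-form marginal: any noising step t can be sampled directly from the clean data, without simulating the chain. A sketch assuming the common linear beta schedule (schedule values are illustrative):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t directly from x_0 via the closed-form marginal
    q(x_t | x_0) = N(sqrt(a_bar_t) x_0, (1 - a_bar_t) I),
    where a_bar_t is the running product of (1 - beta_s)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.normal(size=np.shape(x0))
    return np.sqrt(alpha_bar) * np.asarray(x0) + np.sqrt(1.0 - alpha_bar) * eps, eps

betas = np.linspace(1e-4, 0.02, 1000)   # a commonly used linear schedule
x0 = np.ones(3)
x_t, eps = forward_diffuse(x0, 999, betas, np.random.default_rng(0))
print(np.cumprod(1.0 - betas)[-1])      # near 0: the final x_t is almost pure noise
```

Training then amounts to predicting the injected noise eps from x_t and t; the reverse process and predictor-corrector samplers invert this chain step by step.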
  • Annealed Importance Sampling and Model Evaluation
    • In this module, you'll study annealed importance sampling (AIS) for estimating complex probability distributions, with rigorous mathematical treatment. You will work through the AIS procedure step by step, analyzing its intermediate distributions and normalization constants, and apply these techniques to probabilistic models. To wrap up the course, you will also assess how generative models have evolved.
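The AIS procedure described above can be sketched in one dimension: anneal from a tractable base density to an unnormalized target along a geometric path of intermediate distributions, accumulating log-importance-weights whose average estimates the ratio of normalizing constants. A toy example with Gaussians, where the true answer log(Z1/Z0) = log 2 is known:

```python
import numpy as np

def ais_log_weights(log_f0, log_f1, betas, n_chains, rng, n_mcmc=5, step=1.0):
    """Toy 1-D AIS along the geometric path
    log f_b(x) = (1 - b) log f0(x) + b log f1(x).
    Returns per-chain log-weights; their exponentiated mean estimates Z1/Z0."""
    x = rng.normal(size=n_chains)              # exact samples from f0 = N(0, 1)
    log_w = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += (b - b_prev) * (log_f1(x) - log_f0(x))
        log_fb = lambda y: (1 - b) * log_f0(y) + b * log_f1(y)
        for _ in range(n_mcmc):                # Metropolis moves targeting f_b
            prop = x + step * rng.normal(size=n_chains)
            accept = np.log(rng.uniform(size=n_chains)) < log_fb(prop) - log_fb(x)
            x = np.where(accept, prop, x)
    return log_w

# Anneal from N(0, 1) to an unnormalized N(0, 2^2); true log(Z1/Z0) = log 2.
log_f0 = lambda x: -0.5 * x**2
log_f1 = lambda x: -0.5 * x**2 / 4.0
log_w = ais_log_weights(log_f0, log_f1, np.linspace(0, 1, 50),
                        n_chains=4000, rng=np.random.default_rng(0))
print(np.log(np.mean(np.exp(log_w))), np.log(2.0))  # estimate vs. ground truth
```

More intermediate distributions and more MCMC steps per stage trade compute for lower-variance weights, which is the central tuning question AIS raises in practice.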

Taught by

Ramin Mohammadi
