
Generative AI Part 1

Northeastern University via Coursera

Overview

This course introduces the theoretical foundations and advanced concepts of neural networks, generative models, transformers, and large language models. Students explore how these AI systems create new data, process information, and learn through feedback, while analyzing their applications across a range of fields. The course emphasizes key principles of model building, optimization, and real-world generative AI use cases.

Syllabus

  • Foundations of Neural Networks and Optimization
    • In this module, you will explore the foundations of neural networks, including perceptrons, architectures, and learning algorithms. You will dive deeply into optimization methods critical for efficient training, focusing on advanced techniques like Newton’s and quasi-Newton methods, momentum, RMSProp, and Adam optimization algorithms.
  • Regularization and Generalization Techniques
    • This module guides you through the mathematical approaches to regularization techniques that enhance neural network generalization and prevent overfitting. You will analyze concepts including Stein’s unbiased risk estimator, eigen decomposition, ensemble methods, dropout mechanisms, and advanced normalization techniques such as batch normalization.
  • Convolutional Neural Networks
    • In this module, you will examine convolutional neural networks (CNNs), including convolution operations, parameter sharing, kernel methods, and multi-dimensional data structures. You'll explore advanced CNN architectures, regularization, normalization techniques, and the implications of random kernels on network learning behavior.
  • Generative Models and Maximum Likelihood Estimation
    • In this module, you will analyze the mathematics underpinning generative models and maximum likelihood estimation (MLE). You will explore divergence metrics such as Kullback-Leibler divergence, Bayesian network structures, and autoregressive modeling methods, focusing on their theoretical foundations and practical implications.
  • Recurrent Neural Networks
    • In this module, you will rigorously examine the foundations and implementation details of Recurrent Neural Networks (RNNs) for modeling sequential data. You will study the structure, dynamics, training procedures, and limitations of standard RNNs, explore gated architectures like LSTM and GRU mathematically, and extend these models with bidirectional and multilayer approaches.
  • Sequence-to-Sequence Models and Attention Mechanism
    • You will explore techniques essential to sequence-to-sequence modeling, with special emphasis on attention mechanisms. The module will guide you through the motivations behind attention, how attention weights are calculated, and how attention significantly improves sequence models in practical tasks.
  • Transformer Architecture
    • This module offers a deep investigation into Transformer architectures, focusing on self-attention mechanisms, positional encodings, multi-head attention, and various Transformer configurations. You will analyze how Transformers structurally differ from RNNs, and mathematically explore their capabilities and limitations.
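As a small taste of the attention mechanism covered in the final two modules, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer. The function name, toy dimensions, and random inputs are illustrative only and not taken from the course materials.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """One attention head: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarities
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 query positions attending over 4 key/value positions, d_k = 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
# Each row of w is a probability distribution over the 4 keys,
# and out is a weighted mixture of the value vectors.
```

Each output row is a convex combination of the value rows, with mixing weights determined by query-key similarity; the 1/sqrt(d_k) scaling keeps the softmax from saturating as the key dimension grows.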

Taught by

Ramin Mohammadi

