Overview
Build foundational skills in deep learning by designing and training neural networks to solve complex real-world problems. You’ll begin with the essentials of neural networks, advancing to specialized architectures such as Convolutional and Recurrent Neural Networks, Transformers, Generative Adversarial Networks, and Diffusion Models. Through hands-on projects, you’ll create models for applications such as image classification, Q&A, and CAPTCHA image generation, gaining practical experience with PyTorch and advanced training techniques. Ideal for those aiming to harness the potential of deep learning, this program prepares you to tackle AI challenges across a variety of domains.
Syllabus
- Constructing and Training Neural Networks
- This course covers foundational deep learning theory and practice. We begin with how to think about deep learning and when it is the right tool to use. The course covers the fundamental algorithms of deep learning, deep learning architecture and goals, and interweaves the theory with implementation in PyTorch.
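As a taste of the PyTorch workflow this course builds toward, here is a minimal sketch of defining and training a small network. The toy regression task (learning y = 2x + 1 from random inputs) is an illustrative assumption, not a course assignment:

```python
import torch
import torch.nn as nn

# Hypothetical toy task: learn y = 2x + 1 from randomly sampled inputs.
torch.manual_seed(0)
x = torch.randn(64, 1)
y = 2 * x + 1

# A small feed-forward network: one hidden layer with a ReLU nonlinearity.
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# The standard training loop: forward pass, loss, backward pass, update.
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

The same loop structure (zero gradients, compute loss, backpropagate, step the optimizer) carries over to every architecture covered later in the program; only the model and the data change.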
- Building Convolutional Neural Networks for Computer Vision
- This course introduces Convolutional Neural Networks (CNNs), the most widely used type of neural network for image processing. You will learn the main characteristics that make CNNs so effective for images, how they work internally, and how to build them from scratch for image classification tasks. You will survey the most successful CNN architectures and their defining characteristics, then apply those architectures to custom datasets using transfer learning. You will also study autoencoders, an important architecture underlying many modern CNNs, and use them for anomaly detection and image denoising. Finally, you will learn how to use CNNs for object detection and semantic segmentation.
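To make the "build them from scratch" part concrete, here is a minimal sketch of a CNN classifier in PyTorch. The input size (28×28 grayscale) and layer widths are illustrative assumptions, not the course's exact model:

```python
import torch
import torch.nn as nn

# A small CNN for hypothetical 28x28 grayscale images and 10 classes.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)              # extract convolutional features
        return self.classifier(x.flatten(1))  # flatten and score each class

# Forward a dummy batch of 4 images to check the output shape.
logits = SmallCNN()(torch.randn(4, 1, 28, 28))
```

The convolution-plus-pooling pattern shown here (local filters followed by spatial downsampling) is the shared backbone of the classic architectures the course surveys; transfer learning replaces `SmallCNN` with a pretrained backbone and retrains only the classifier head.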
- Creating Sequence Models and Transformers
- This course covers the fundamentals and applications of sequence modeling. The course begins with an overview of sequence models and their significance, followed by hands-on lessons to tokenize text and develop embeddings using PyTorch. Participants will explore recurrent neural networks (RNNs) and their variants, including LSTMs and GRUs, progressing to Seq2Seq models and the implementation of attention mechanisms. The course culminates in a comprehensive understanding of transformers, self-attention, and industry evaluation practices. By the end, students will build a transformer-based Q&A system, solidifying their grasp of modern NLP frameworks.
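The self-attention mechanism at the heart of transformers can be sketched in a few lines. This simplified version skips the learned query/key/value projections a real transformer layer would use; the tensor sizes are illustrative:

```python
import torch

# Scaled dot-product self-attention over a batch of token embeddings.
def self_attention(x):                        # x: (batch, seq_len, d_model)
    d = x.size(-1)
    q, k, v = x, x, x                         # in practice, learned projections of x
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # pairwise token similarities
    weights = torch.softmax(scores, dim=-1)       # each row sums to 1
    return weights @ v                            # mix values by attention weight

# Each of the 5 output vectors is a weighted blend of all 5 inputs.
out = self_attention(torch.randn(2, 5, 8))
```

Because every token attends to every other token in one step, attention avoids the sequential bottleneck of the RNNs, LSTMs, and GRUs covered earlier in the course.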
- Building Generative Models
- This course covers the construction and training of Generative Adversarial Networks (GANs), providing a comprehensive understanding of generative models. Starting with foundational concepts of latent spaces and data distributions, learners will progress to implementing generator and discriminator networks using PyTorch. The curriculum emphasizes step-by-step training processes, improvements in GAN architecture, and the exploration of Deep Convolutional GANs. Additionally, the course presents conditional image generation and introduces diffusion models, highlighting comparisons with GANs. Practical applications culminate in a hands-on project focused on creating synthetic handwriting for CAPTCHA systems, reinforcing learned concepts.
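The adversarial setup the course builds can be sketched as a pair of networks with opposing losses. The layer sizes and stand-in "real" data below are toy assumptions for illustration, not the course's exact configuration:

```python
import torch
import torch.nn as nn

# Toy sizes: a 16-dim latent space and flattened 28x28 "images".
latent_dim, data_dim = 16, 784

# Generator: maps latent noise to a fake sample in [-1, 1].
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                  nn.Linear(64, data_dim), nn.Tanh())
# Discriminator: maps a sample to a single real/fake logit.
D = nn.Sequential(nn.Linear(data_dim, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1))

loss_fn = nn.BCEWithLogitsLoss()
real = torch.rand(8, data_dim) * 2 - 1          # stand-in for real data
fake = G(torch.randn(8, latent_dim))            # samples from the latent space

# Discriminator loss: push real toward label 1, fakes toward label 0.
d_loss = (loss_fn(D(real), torch.ones(8, 1)) +
          loss_fn(D(fake.detach()), torch.zeros(8, 1)))
# Generator loss: fool the discriminator into labeling fakes as real.
g_loss = loss_fn(D(fake), torch.ones(8, 1))
```

Training alternates between stepping each network against its own loss; `fake.detach()` keeps the discriminator update from backpropagating into the generator. Diffusion models, introduced later in the course, replace this two-player game with iterative denoising of a noised sample.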
Taught by
Samantha Guerriero, Antje Muntzinger, Sohbet Dovranov and Temi Afeye