Overview
Unlock the power of deep learning to transform visual data into actionable insights. This hands-on course guides you through the foundational and advanced techniques that drive modern computer vision applications—from image classification to generative modeling.
You'll begin with the building blocks of deep learning: understanding how multilayer perceptrons (MLPs) work and exploring normalization techniques that stabilize and accelerate training. You'll then dive into unsupervised learning with autoencoders and discover the magic behind Generative Adversarial Networks (GANs), which can create realistic images from noise. Next, you'll master the architecture that revolutionized computer vision by learning how CNNs extract spatial hierarchies and patterns from images for tasks like object detection and recognition. Finally, you'll explore cutting-edge architectures: ResNet introduces residual learning for deeper networks, while U-Net powers precise image segmentation in medical imaging and beyond.
Whether you're a data scientist, engineer, or AI enthusiast, this course equips you with the skills to build and deploy deep learning models for real-world vision tasks. With practical examples and guided learning, you'll gain both theoretical understanding and hands-on experience.
This course can be taken for academic credit as part of CU Boulder’s MS in Data Science or MS in Computer Science degrees offered on the Coursera platform. These fully accredited graduate degrees offer targeted courses, short 8-week sessions, and pay-as-you-go tuition. Admission is based on performance in three preliminary courses, not academic history. CU degrees on Coursera are ideal for recent graduates or working professionals. Learn more:
MS in Data Science: https://www.coursera.org/degrees/master-of-science-data-science-boulder
MS in Computer Science: https://coursera.org/degrees/ms-computer-science-boulder
Syllabus
- Neural Network, Multi-Layer Perceptron, and Normalization
- Welcome to Deep Learning for Computer Vision, the second course in the Computer Vision specialization. In this first module, you'll be introduced to the principles behind neural networks and their use in visual recognition tasks. You'll begin by learning the basic building blocks—neurons, weights, biases—and progress toward constructing simple multi-layer perceptrons. Then, you'll discover key concepts like activation functions, batch processing, and graph-to-matrix conversions. Finally, you will visualize neural networks with an emphasis on classification tasks.
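To make the module's ideas concrete, here is a minimal sketch (not course material; shapes and values are illustrative) of a two-layer MLP forward pass, showing how the graph of neurons reduces to matrix multiplications and how a whole batch is processed in one call:

```python
import numpy as np

def relu(x):
    # Rectified linear activation, applied element-wise
    return np.maximum(0.0, x)

def mlp_forward(x, W1, b1, W2, b2):
    """Two-layer perceptron: linear -> ReLU -> linear."""
    h = relu(x @ W1 + b1)   # hidden-layer activations
    return h @ W2 + b2      # output logits (e.g. one per class)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)) * 0.1   # weights: 4 inputs -> 8 hidden units
b1 = np.zeros(8)                     # hidden biases
W2 = rng.normal(size=(8, 3)) * 0.1   # weights: 8 hidden -> 3 outputs
b2 = np.zeros(3)                     # output biases

# Batch processing: a (batch, features) matrix flows through in one call.
batch = rng.normal(size=(5, 4))
logits = mlp_forward(batch, W1, b1, W2, b2)
print(logits.shape)  # (5, 3): one 3-way score vector per sample
```

The same matrix-multiply structure is what the module's graph-to-matrix conversion refers to: each layer's connections collapse into a single weight matrix.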
- Autoencoder and GAN
- In this module, you’ll explore two powerful architectures in deep learning: autoencoders and generative adversarial networks (GANs). You’ll begin by learning how autoencoders compress and reconstruct data using encoder-decoder structures, and how reconstruction loss is minimized through backpropagation and gradient descent. You’ll then examine the role of loss functions and optimization techniques in training these models. In the second half of the module, you’ll dive into GANs, where a generator and discriminator compete to produce realistic synthetic data. You’ll study how adversarial training works, how binary cross-entropy loss is applied, and how GANs are used to model complex data distributions. By the end of this module, you’ll be able to implement and evaluate both autoencoders and GANs for representation learning and data generation.
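As a taste of the autoencoder half of this module, the following sketch (illustrative, not course code) builds a tiny linear autoencoder, measures reconstruction loss, and takes one hand-derived gradient-descent step to show the loss decreasing:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 10))          # toy data: 64 samples, 10 features

# Linear autoencoder: 10 -> 3 (bottleneck) -> 10
W_enc = rng.normal(size=(10, 3)) * 0.1
W_dec = rng.normal(size=(3, 10)) * 0.1

def reconstruction_loss(X, W_enc, W_dec):
    Z = X @ W_enc                      # encode to a low-dimensional code
    X_hat = Z @ W_dec                  # decode back to input space
    return np.mean((X - X_hat) ** 2), Z, X_hat

loss0, Z, X_hat = reconstruction_loss(X, W_enc, W_dec)

# One gradient-descent step on the MSE; gradients derived by backpropagation.
n = X.size
dX_hat = 2.0 * (X_hat - X) / n         # dLoss/dX_hat
grad_dec = Z.T @ dX_hat                # dLoss/dW_dec
grad_enc = X.T @ (dX_hat @ W_dec.T)    # dLoss/dW_enc
lr = 0.5
W_enc -= lr * grad_enc
W_dec -= lr * grad_dec

loss1, _, _ = reconstruction_loss(X, W_enc, W_dec)
print(loss1 < loss0)  # reconstruction loss falls after one step
```

A GAN replaces this single reconstruction objective with two competing ones: the discriminator minimizes binary cross-entropy on real-vs-fake labels while the generator maximizes the discriminator's error.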
- Convolutional Neural Networks
- In this module, you’ll learn how convolutional neural networks extract features from images and perform classification. You’ll begin by building a tiny CNN by hand and in Excel, exploring convolution, max-pooling, and fully connected layers. Then, you’ll scale up to larger CNN architectures and examine how they process data through multiple convolution and pooling stages. You’ll also study how categorical cross-entropy loss and gradients are computed for training. Finally, you’ll walk through backpropagation across all CNN layers to understand how learning occurs.
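The convolution and max-pooling operations described above can be written out directly. This is a minimal from-scratch sketch (illustrative shapes and kernel values, not course code) of a single feature map passing through one convolution and one pooling stage:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' cross-correlation of a 2-D image with a 2-D kernel."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is a weighted sum over a local window
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling: keep the strongest response per window."""
    H2, W2 = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:H2*size, :W2*size].reshape(H2, size, W2, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 "image"
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])             # responds to vertical edges

fmap = conv2d(image, edge_kernel)   # 6x6 -> 5x5 feature map
pooled = max_pool(fmap)             # 5x5 -> 2x2 after 2x2 pooling
print(fmap.shape, pooled.shape)
```

Stacking several such convolution-plus-pooling stages, then flattening into fully connected layers trained with categorical cross-entropy, gives the classification pipeline this module walks through.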
- ResNet and U-Net
- In this module, you’ll explore two influential deep learning architectures: ResNet and U-Net. You’ll begin by learning how ResNet uses skip connections and residual learning to enable the training of very deep networks, addressing challenges like vanishing and exploding gradients. You’ll examine how residual blocks preserve information and support the learning of higher-level features across layers. Then, you’ll shift to U-Net, a powerful architecture for image segmentation, and study its encoder-decoder structure, skip connections, and upsampling techniques like transposed convolution. By the end of this module, you’ll understand how both architectures enhance learning efficiency and performance in complex vision tasks.
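The residual idea at the heart of both architectures fits in a few lines. This sketch (illustrative, not course code) shows a residual block computing x + F(x); with the second weight matrix zeroed, the block is exactly the identity, which is why adding residual layers never has to hurt a deep network:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """Residual block: output = x + F(x).

    The identity skip connection gives gradients a direct path back
    through the network, easing the training of very deep ResNets.
    """
    f = relu(x @ W1) @ W2   # residual branch F(x)
    return x + f            # skip connection adds the input back

rng = np.random.default_rng(2)
d = 6
x = rng.normal(size=(4, d))

W1 = rng.normal(size=(d, d)) * 0.1
W2 = np.zeros((d, d))       # zero-initialized residual branch: F(x) = 0

out = residual_block(x, W1, W2)
print(np.allclose(out, x))  # True: the block passes x through unchanged
```

U-Net uses the same skip-connection idea differently: encoder feature maps are concatenated into the decoder at matching resolutions, so fine spatial detail lost to downsampling is restored during upsampling.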
Taught by
Tom Yeh