Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Introduction to Neural Networks and PyTorch

IBM via Coursera

Overview

Get ready to build the foundational PyTorch skills you need to launch your career as an AI Engineer, one of the fastest-growing job titles in the United States. Starting with tensors, this course takes you through to fully trained classification models. You will master tensor operations, build custom datasets, and implement linear regression models using PyTorch's nn.Module and autograd system. You will then progress through gradient descent, stochastic and mini-batch training, loss functions, and training/validation workflows. You will also build logistic regression classifiers, apply cross-entropy loss, and implement advanced optimization and regularization techniques. Through interactive labs, instructional videos, and AI-assisted dialogue, you will practice building, training, and evaluating models using real PyTorch code patterns. By the end, you will create a portfolio-worthy project that demonstrates your ability to perform PyTorch classification and gradient-based optimization tasks. Enroll now to enhance your resume and complete a project that showcases your hands-on skills in the AI-driven job market.

Syllabus

  • Exploring Tensors
    • In this module, you'll build your foundation in PyTorch by working directly with tensors. You'll explore one- and two-dimensional tensors, common tensor operations, and attributes like shape, dtype, and numel(). You'll also examine basic differentiation concepts and see how PyTorch's autograd system tracks and computes gradients. Through guided practice, you'll learn how to connect linear algebra concepts to real PyTorch code.
  • Building Datasets in PyTorch
    • In this module, you'll learn how to structure and prepare data for training in PyTorch. You'll create custom dataset classes, implement __len__ and __getitem__, and apply preprocessing steps using transforms and Compose. You'll also work with image datasets and Torchvision patterns. By the end, you'll understand how data flows into a PyTorch model during training.
  • Applying Linear Regression and Gradient Descent
    • In this module, you'll learn how to build and train linear regression models in PyTorch. You'll explore how models are defined using nn.Module, how state_dict() stores parameters, and how loss functions measure prediction error. You'll examine cost surfaces, gradient descent, learning rates, and stopping criteria. Through hands-on training loops, you'll see how slope and bias update over time as the model minimizes loss.
  • Training Linear Regression Models the PyTorch Way
    • In this module, you'll discover how to implement training workflows using PyTorch tools such as DataLoader and optimizers. You'll learn how to compare batch, stochastic, and mini-batch gradient descent, and examine how batch size, epochs, and learning rate affect convergence. You'll learn how to structure full training loops with forward passes, backpropagation, and parameter updates. Finally, you'll explore training, validation, and test splits to evaluate model performance and detect overfitting.
  • Extending Linear Regression to Multiple Inputs and Outputs
    • In this module, you'll explore how to extend linear regression to handle multiple input features and multiple outputs. You'll learn how to use nn.Linear and custom modules to build higher-dimensional models and discover how weights and bias expand from scalars to vectors and matrices. You'll practice working with vectorized cost functions, gradient descent, and training workflows using DataLoaders and optimizers. Through hands-on labs, you'll learn how to build, train, and evaluate multi-dimensional and multi-output regression models step by step using real PyTorch code patterns.
  • Applying Logistic Regression for Classification
    • In this module, you'll explore how to move from regression to classification. You'll learn how to build logistic regression models using nn.Sequential, apply the sigmoid function to generate probabilities, and convert probabilities into class predictions. You'll examine the Bernoulli distribution and maximum likelihood estimation and discover why cross-entropy loss is preferred over Mean Squared Error (MSE) for classification tasks. You'll also explore optimization and regularization techniques that help improve classification performance.
  • Final Project, Final Quiz, and Course Wrap-Up
    • In this module, you'll apply what you've explored throughout the course in a hands-on classification project: building a logistic regression model to predict the outcomes of League of Legends matches. Drawing on various in-game statistics, you'll apply your knowledge of PyTorch, logistic regression, and data handling to create a robust predictive model. Finally, you can choose between immediate auto-grading with Mark, the IBM AI-assisted assessment tool, or submitting your assignment for human peer review.
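The tensor and autograd basics covered in the first module can be sketched in a few lines. This is a minimal illustration, not course material: the specific tensors and the function y = x² + 3x are made up for the example.

```python
import torch

# One- and two-dimensional tensors and their basic attributes
a = torch.tensor([1.0, 2.0, 3.0])           # 1-D tensor
M = torch.tensor([[1.0, 2.0], [3.0, 4.0]])  # 2-D tensor

shape = M.shape    # torch.Size([2, 2])
dtype = M.dtype    # torch.float32
count = M.numel()  # total number of elements: 4

# Autograd: PyTorch records operations on tensors with requires_grad=True
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x  # y = x^2 + 3x
y.backward()        # computes dy/dx and stores it in x.grad
grad = x.grad       # dy/dx = 2x + 3 = 7 at x = 2
```

Calling `backward()` on a scalar populates `.grad` on every leaf tensor that participated in the computation, which is the mechanism the later training loops rely on.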
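The custom dataset pattern from the second module, with `__len__` and `__getitem__`, might look like the sketch below. The class name `ToyDataset` and the synthetic y = 2x + 1 data are illustrative assumptions, not taken from the course labs.

```python
import torch
from torch.utils.data import Dataset

class ToyDataset(Dataset):
    """Minimal custom dataset: n points on the line y = 2x + 1."""

    def __init__(self, n=10, transform=None):
        self.x = torch.arange(n, dtype=torch.float32).unsqueeze(1)
        self.y = 2 * self.x + 1
        self.transform = transform  # optional preprocessing callable

    def __len__(self):
        # Number of samples; DataLoader uses this for batching
        return len(self.x)

    def __getitem__(self, idx):
        # Return one (input, target) pair, transformed if requested
        sample = (self.x[idx], self.y[idx])
        if self.transform:
            sample = self.transform(sample)
        return sample

ds = ToyDataset(n=5)
x0, y0 = ds[0]  # first sample: x = 0, y = 1
```

Because the class implements the `Dataset` protocol, it can be handed directly to a `DataLoader`, which is how data flows into a model during training.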
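The linear-regression training workflow from the middle modules, combining nn.Module, a loss function, an optimizer, and mini-batch gradient descent via DataLoader, can be sketched as follows. The synthetic target y = 3x − 1, the learning rate, and the epoch count are all assumptions chosen for the example.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Synthetic data: y = 3x - 1 plus a little noise (illustrative values)
X = torch.linspace(-1, 1, 64).unsqueeze(1)
Y = 3 * X - 1 + 0.05 * torch.randn_like(X)

class LR(nn.Module):
    """Linear regression as a custom nn.Module."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)  # one input feature, one output

    def forward(self, x):
        return self.linear(x)

model = LR()
criterion = nn.MSELoss()                                  # mean squared error
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # gradient descent
loader = DataLoader(TensorDataset(X, Y), batch_size=16, shuffle=True)

for epoch in range(100):           # mini-batch training loop
    for xb, yb in loader:
        optimizer.zero_grad()      # clear gradients from the previous step
        loss = criterion(model(xb), yb)
        loss.backward()            # backpropagation
        optimizer.step()           # parameter update

w = model.linear.weight.item()     # should approach 3
b = model.linear.bias.item()       # should approach -1
```

Setting `batch_size` to the full dataset recovers batch gradient descent, and `batch_size=1` recovers stochastic gradient descent, which is the comparison the module draws.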
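The move from regression to classification in the logistic-regression module can be sketched with nn.Sequential, a sigmoid output, and binary cross-entropy loss. The toy separable data (class 1 for x > 0) and hyperparameters are assumptions made for this illustration.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Logistic regression as nn.Sequential: linear layer followed by sigmoid
model = nn.Sequential(nn.Linear(1, 1), nn.Sigmoid())

# Toy separable data: label 1 where x > 0, label 0 otherwise
X = torch.linspace(-2, 2, 40).unsqueeze(1)
Y = (X > 0).float()

criterion = nn.BCELoss()  # binary cross-entropy, not MSE, for classification
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

for _ in range(500):               # full-batch gradient descent
    optimizer.zero_grad()
    loss = criterion(model(X), Y)  # compare probabilities to 0/1 labels
    loss.backward()
    optimizer.step()

probs = model(X)                   # sigmoid outputs in (0, 1)
preds = (probs > 0.5).float()      # threshold probabilities into classes
accuracy = (preds == Y).float().mean().item()
```

Thresholding the sigmoid output at 0.5 converts probabilities into class predictions, the same final step used in the course's match-prediction project.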

Taught by

Joseph Santarcangelo

Reviews

4.4 rating at Coursera, based on 1,901 ratings

