

Machine Learning with PyTorch and Scikit-Learn

Packt via Coursera

Overview

This course offers a comprehensive exploration of machine learning and deep learning using PyTorch and scikit-learn. Packed with clear explanations, visualizations, and working examples, it covers essential machine learning techniques in depth, along with the latest trends in deep learning, including GANs, reinforcement learning, and NLP with transformers, as well as two cutting-edge techniques: transformers and graph neural networks.

The course is designed for developers and data scientists with a solid understanding of Python basics, calculus, and linear algebra. It is ideal for those looking to create practical machine learning applications using scikit-learn and PyTorch, and to deepen their knowledge of advanced deep learning techniques. Throughout this course you will learn to:

  • Develop machine learning models using scikit-learn and PyTorch.
  • Implement neural networks and transformers for various data types.
  • Apply best practices for model evaluation and tuning.

This course is based on material written by an expert author, bringing the depth of a book into a more engaging, interactive format. The core content is delivered through clear, structured text you can read at your own pace, supported by short videos and quizzes that highlight key ideas and test your understanding. By combining the strengths of book learning with interactive assessments, you get the best of both worlds: the depth and clarity of an author's expertise, plus the flexibility to revisit, practice, and reinforce concepts whenever you need.

Syllabus

  • Giving Computers the Ability to Learn from Data
    • In this section, we explore the foundational concepts of machine learning, focusing on how algorithms can transform data into knowledge. We delve into the practical applications of supervised and unsupervised learning, equipping you with the skills to implement these techniques using Python tools for effective data analysis and prediction.
  • Training Simple Machine Learning Algorithms for Classification
    • In this section, we implement the perceptron algorithm in Python to classify flower species in the Iris dataset, enhancing our understanding of machine learning classification. We also explore adaptive linear neurons to optimize models, using tools like pandas, NumPy, and Matplotlib for data processing and visualization.
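To make the perceptron idea concrete, here is a minimal NumPy sketch of the classic update rule, trained on a tiny linearly separable toy dataset standing in for two Iris classes. This is an illustrative sketch, not the course's exact implementation; the class name and toy data are made up for the example.

```python
import numpy as np

class Perceptron:
    """Minimal perceptron: w <- w + eta * (y - y_hat) * x for each sample."""
    def __init__(self, eta=0.1, n_iter=10):
        self.eta, self.n_iter = eta, n_iter

    def fit(self, X, y):
        self.w_ = np.zeros(X.shape[1])
        self.b_ = 0.0
        for _ in range(self.n_iter):
            for xi, target in zip(X, y):
                update = self.eta * (target - self.predict(xi))
                self.w_ += update * xi
                self.b_ += update
        return self

    def predict(self, X):
        # Threshold the net input at zero: class +1 or -1
        return np.where(X @ self.w_ + self.b_ >= 0.0, 1, -1)

# Toy linearly separable data (a stand-in for two Iris species)
X = np.array([[2.0, 1.0], [3.0, 1.5], [1.0, 3.0], [0.5, 2.5]])
y = np.array([1, 1, -1, -1])
clf = Perceptron().fit(X, y)
```

Because the data is linearly separable, the perceptron convergence theorem guarantees the weights stop changing after a finite number of updates.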
  • A Tour of Machine Learning Classifiers Using Scikit-Learn
    • In this section, we explore various machine learning classifiers using scikit-learn's Python API, focusing on their implementation and practical applications. We analyze the strengths and weaknesses of classifiers with both linear and nonlinear decision boundaries to enhance our understanding of solving real-world classification problems efficiently.
  • Building Good Training Datasets: Data Preprocessing
    • In this section, we focus on data preprocessing techniques using pandas 2.x to enhance machine learning model performance. We address missing data handling and feature selection to optimize model accuracy and efficiency.
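As a small illustration of the missing-data handling covered here, the following pandas sketch shows the two most common strategies: dropping incomplete rows versus imputing with the column mean. The column names are invented for the example.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "sepal_len": [5.1, np.nan, 6.2, 5.9],
    "petal_len": [1.4, 4.5, np.nan, 5.1],
})

# Option 1: drop every row that contains any missing value
dropped = df.dropna()

# Option 2: impute missing values with each column's mean
imputed = df.fillna(df.mean())
```

Dropping rows is simple but discards information; mean imputation keeps the sample size at the cost of introducing a slight bias.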
  • Compressing Data via Dimensionality Reduction
    • In this section, we explore dimensionality reduction techniques such as PCA and LDA to simplify large datasets while preserving essential information. We also examine t-SNE for effective data visualization, enhancing our ability to manage and interpret complex data efficiently.
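A quick scikit-learn sketch of the PCA workflow described above, using synthetic rank-2 data so the first two components capture essentially all of the variance. The data-generation details are made up for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 samples with 5 correlated features (underlying rank is 2)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)  # project onto the top 2 principal components
```

Inspecting `pca.explained_variance_ratio_` tells you how much information each retained component preserves, which is the usual way to choose the number of components.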
  • Learning Best Practices for Model Evaluation and Hyperparameter Tuning
    • In this section, we explore best practices for evaluating and refining machine learning models, focusing on techniques like K-Fold Cross-Validation and hyperparameter tuning to enhance model performance. We also diagnose bias and variance issues using learning curves, ensuring models are both accurate and reliable in real-world applications.
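The combination of K-fold cross-validation and hyperparameter tuning mentioned above can be sketched with scikit-learn's `GridSearchCV`, which runs the cross-validation loop for every candidate setting. The parameter grid below is an arbitrary example.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold

X, y = load_iris(return_X_y=True)

# 5-fold stratified CV keeps class proportions equal in every fold
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # inverse regularization strength
    cv=cv,
)
grid.fit(X, y)
```

After fitting, `grid.best_params_` and `grid.best_score_` report the winning setting and its mean cross-validated accuracy.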
  • Combining Different Models for Ensemble Learning
    • In this section, we explore ensemble learning techniques by implementing majority voting, bagging, and boosting to enhance model accuracy and robustness. We focus on practical applications, such as reducing overfitting and improving weak learner performance, to build more reliable predictive models.
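Majority voting, the simplest of the ensemble techniques listed above, can be sketched with scikit-learn's `VotingClassifier`. The choice of base estimators here is illustrative, not prescribed by the course.

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=300, noise=0.25, random_state=1)

# Hard voting: each base model casts one vote per sample
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=3, random_state=1)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="hard",
)
score = cross_val_score(ensemble, X, y, cv=5).mean()
```

The intuition is that diverse classifiers make different mistakes, so the majority vote is usually more robust than any single member.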
  • Applying Machine Learning to Sentiment Analysis
    • In this section, we apply machine learning to sentiment analysis by preparing IMDb movie review data, transforming text into feature vectors, and training a logistic regression model for classification. We also explore out-of-core learning techniques to handle large datasets efficiently, enhancing our ability to derive insights from extensive text data collections.
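The bag-of-words pipeline described above (text to feature vectors to logistic regression) can be sketched as follows. The tiny corpus is a made-up stand-in for the IMDb dataset used in the course.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy reviews: 1 = positive, 0 = negative (stand-in for IMDb data)
reviews = [
    "a wonderful, moving film", "great acting and a great story",
    "simply a wonderful story", "great fun, wonderful cast",
    "a dull, boring mess", "terrible acting and a boring plot",
    "simply terrible", "dull plot, terrible cast",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# TF-IDF turns each review into a sparse weighted word-count vector
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)
pred = model.predict(["a wonderful film with great acting"])[0]
```

For corpora too large to fit in memory, the out-of-core approach mentioned above swaps `TfidfVectorizer` for a stateless `HashingVectorizer` and trains incrementally with `SGDClassifier.partial_fit`.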
  • Predicting Continuous Target Variables with Regression Analysis
    • In this section, we explore regression analysis to predict continuous target variables, focusing on implementing linear regression with scikit-learn and designing robust models to handle outliers. We also analyze nonlinear data using polynomial regression, enhancing our ability to interpret complex data patterns and make informed predictions in scientific and industrial contexts.
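The contrast between linear and polynomial regression described above can be sketched in a few lines: on data with a quadratic trend, a degree-2 polynomial model fits far better than a straight line. The synthetic data is invented for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, size=(100, 1)), axis=0)
y = 0.5 * X.ravel() ** 2 + rng.normal(scale=0.2, size=100)  # quadratic + noise

linear = LinearRegression().fit(X, y)
# Expand features to [x, x^2] before fitting the same linear model
poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
```

Comparing the two models' R² scores (`.score(X, y)`) makes the underfitting of the plain linear model visible.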
  • Working with Unlabeled Data - Clustering Analysis
    • In this section, we explore clustering analysis to organize unlabeled data into meaningful groups using unsupervised learning techniques. We implement k-means clustering with scikit-learn, design hierarchical clustering trees, and analyze data density with DBSCAN to enhance data analysis and decision-making processes.
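A minimal sketch of the k-means workflow mentioned above, run on synthetic blob data so the "right" number of clusters is known in advance:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three well-separated Gaussian blobs in 2-D
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=1)

# n_init=10 restarts k-means from 10 random seeds and keeps the best run
km = KMeans(n_clusters=3, n_init=10, random_state=1)
labels = km.fit_predict(X)
```

On real, unlabeled data the number of clusters is unknown; the course's elbow and silhouette analyses are the standard tools for choosing it.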
  • Implementing a Multilayer Artificial Neural Network from Scratch
    • In this section, we implement a multilayer neural network from scratch using Python, focusing on the backpropagation algorithm for training. We also evaluate the network's performance on image classification tasks, emphasizing the importance of understanding these foundational concepts for developing advanced deep learning models.
  • Parallelizing Neural Network Training with PyTorch
    • In this section, we delve into how PyTorch enhances neural network training efficiency by utilizing its Dataset and DataLoader for streamlined input pipelines. We also explore the implementation of neural networks using PyTorch's torch.nn module and analyze various activation functions to optimize artificial neural networks.
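The `Dataset`/`DataLoader` input pipeline described above can be sketched as follows; the toy tensors are made up for the example.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Wraps feature/label tensors so DataLoader can batch and shuffle them."""
    def __init__(self, X, y):
        self.X, self.y = X, y

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]

X = torch.randn(100, 4)            # 100 samples, 4 features
y = torch.randint(0, 2, (100,))    # binary labels
loader = DataLoader(ToyDataset(X, y), batch_size=16, shuffle=True)
xb, yb = next(iter(loader))        # one shuffled mini-batch
```

A training loop then simply iterates `for xb, yb in loader:`, letting the `DataLoader` handle batching, shuffling, and (optionally) parallel data loading.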
  • Going Deeper: The Mechanics of PyTorch
    • In this section, we delve into PyTorch's mechanics, focusing on implementing neural networks using the `torch.nn` module and designing custom layers for research projects. We also analyze computation graphs to enhance model building, equipping you with skills to tackle complex machine learning tasks efficiently.
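Designing a custom layer, as described above, amounts to subclassing `nn.Module` and registering learnable parameters. The layer below is a hypothetical example (a linear transform with a learnable scalar gain), not one from the course.

```python
import torch
import torch.nn as nn

class ScaledLinear(nn.Module):
    """Custom layer: linear transform + ReLU, scaled by a learnable gain."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.gain = nn.Parameter(torch.ones(1))  # registered automatically

    def forward(self, x):
        return self.gain * torch.relu(self.linear(x))

# Custom layers compose with built-in ones inside nn.Sequential
model = nn.Sequential(ScaledLinear(4, 8), nn.Linear(8, 2))
out = model(torch.randn(5, 4))
```

Because `gain` is an `nn.Parameter`, it shows up in `model.parameters()` and is updated by the optimizer like any built-in weight.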
  • Classifying Images with Deep Convolutional Neural Networks
    • In this section, we explore the implementation of convolutional neural networks (CNNs) in PyTorch for image classification tasks, focusing on understanding CNN architectures and enhancing model performance through data augmentation techniques. We also delve into the building blocks of CNNs, including convolution operations and subsampling layers, to equip you with the skills necessary for developing robust image recognition systems.
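The conv/pool building blocks discussed above stack into a small classifier like the sketch below, sized for MNIST-like 28x28 grayscale inputs. The architecture is illustrative, not the course's exact model.

```python
import torch
import torch.nn as nn

# Two conv blocks (conv -> ReLU -> max-pool), then a linear classifier head
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1x28x28 -> 8x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                             # -> 8x14x14 (subsampling)
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # -> 16x14x14
    nn.ReLU(),
    nn.MaxPool2d(2),                             # -> 16x7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # logits for 10 classes
)
logits = cnn(torch.randn(4, 1, 28, 28))  # batch of 4 images
```

Tracking the shape comments is the easiest way to see how convolution preserves spatial size (with padding) while pooling halves it.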
  • Modeling Sequential Data Using Recurrent Neural Networks
    • In this section, we explore the implementation of recurrent neural networks (RNNs) for sequence modeling in PyTorch, focusing on their application in sentiment analysis and character-level language modeling. We delve into the intricacies of RNNs, including long short-term memory (LSTM) cells, to enhance our understanding of processing sequential data effectively.
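A character-level language model of the kind described above boils down to embedding, LSTM, and a linear head over the vocabulary. The sketch below only runs a forward pass; the sizes and class name are assumptions for the example.

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Embedding -> LSTM -> linear head over the character vocabulary."""
    def __init__(self, vocab_size=50, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))  # hidden state at every time step
        return self.head(out)              # next-character logits per step

model = CharRNN()
tokens = torch.randint(0, 50, (2, 20))  # batch of 2 sequences, length 20
logits = model(tokens)
```

Training pairs each position's logits with the following character as its target, so the model learns to predict one step ahead.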
  • Transformers: Improving Natural Language Processing with Attention Mechanisms
    • In this section, we explore how attention mechanisms enhance NLP by improving RNNs and introducing self-attention in transformer models. We also learn to fine-tune BERT for sentiment analysis using PyTorch, advancing language processing applications.
  • Generative Adversarial Networks for Synthesizing New Data
    • In this section, we explore generative adversarial networks (GANs) and their application in synthesizing new data samples, focusing on implementing a simple GAN to generate handwritten digits. We also analyze the loss functions for the generator and discriminator, and discuss improvements using convolutional techniques to enhance data generation quality.
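The generator/discriminator losses discussed above can be sketched with two tiny fully connected networks and binary cross-entropy. This is a shape-level sketch of one training step's loss computation, not a full training loop; the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Generator: latent noise -> flattened 28x28 "image" in [-1, 1]
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
# Discriminator: flattened image -> single real/fake logit
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
z = torch.randn(8, 64)
fake = G(z)

# Generator loss: fool the discriminator into predicting "real" (label 1)
g_loss = bce(D(fake), torch.ones(8, 1))
# Discriminator loss on fakes: predict "fake" (label 0);
# detach() keeps this gradient from flowing back into the generator
d_loss_fake = bce(D(fake.detach()), torch.zeros(8, 1))
```

The convolutional improvement mentioned above (a DCGAN-style architecture) replaces these linear layers with transposed convolutions in `G` and convolutions in `D`.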
  • Graph Neural Networks for Capturing Dependencies in Graph Structured Data
    • In this section, we explore the implementation of graph neural networks (GNNs) using PyTorch Geometric, focusing on designing graph convolutions for molecular property prediction. We also analyze how graph data is represented in neural networks to enhance the understanding and application of GNNs in AI tasks such as drug discovery and traffic forecasting.
  • Reinforcement Learning for Decision Making in Complex Environments
    • In this section, we explore reinforcement learning, covering the theory and implementation of algorithms that train agents to make optimal decisions. We examine key concepts such as Markov decision processes, Q-learning, and deep Q-learning, with practical examples in Python using OpenAI Gym.

Taught by

Packt - Course Instructors

