Practical Machine Learning: Foundations to Neural Networks
Dartmouth College via Coursera Specialization
Overview
You will develop the ability to rigorously formulate learning tasks using probability and statistics, distinguish Bayesian and frequentist perspectives, build linear models for regression and classification, estimate optimal model parameters via Maximum Likelihood Estimation (MLE), and apply neural networks to practical problems. The series progresses from foundational methods to real-world neural network implementation.
By the end of this specialization, learners will be able to:
- Express learning tasks with mathematical rigor using ideas from probability and statistics.
- Deconstruct Bayesian and frequentist perspectives and use them to approach machine learning tasks with well-reasoned strategies.
- Apply maximum likelihood estimation (MLE) to find the optimal parameters of a model.
- Build linear models for regression and for classification.
- Design and implement artificial neural networks tailored to the needs of particular regression and classification tasks, and apply the theory of neural networks to building models.
Syllabus
- Course 1: Foundations for Machine Learning
- Course 2: Machine Learning Fundamentals
- Course 3: Machine Learning with Neural Networks
Courses
- This course provides a practical and theoretical tour of the essential probability distributions used most often in modern machine learning and data science. We explore the fundamental building blocks for modeling discrete events (the Bernoulli, binomial, and multinomial distributions) and continuous quantities (the Gaussian distribution), and discuss the implications of Bayes' theorem. We then examine two perspectives on estimating model parameters, the Bayesian and the frequentist: reasoning about uncertainty in the parameters themselves using the powerful beta and Dirichlet distributions (Bayesian), and computing maximum likelihood estimates (frequentist). By the end of this course, you will have a fluent command of the mathematical "language" needed to understand, build, and interpret probabilistic models.
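As a concrete taste of the frequentist side, maximum likelihood estimation for a Bernoulli model has a simple closed form. The sketch below uses made-up coin-flip data (not course material) and confirms numerically that the sample mean maximizes the log-likelihood:

```python
import numpy as np

# Hypothetical coin flips: 1 = heads, 0 = tails.
flips = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])

# For a Bernoulli(theta) model, the log-likelihood is
#   sum_i [x_i * log(theta) + (1 - x_i) * log(1 - theta)];
# setting its derivative to zero yields the MLE: the sample mean.
theta_mle = flips.mean()

# Sanity check: the sample mean also maximizes the log-likelihood on a grid.
grid = np.linspace(0.01, 0.99, 99)
loglik = flips.sum() * np.log(grid) + (len(flips) - flips.sum()) * np.log(1 - grid)
theta_grid = grid[np.argmax(loglik)]
```

The grid search is redundant here, but it mirrors how one would check an estimator without a closed-form solution.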
- This course provides a brief introduction to the theory and practice of supervised machine learning, the discipline of teaching computers to make predictions from labeled data. We begin with the well-known linear regression model, moving from fundamental principles to the regularization techniques essential for building robust models. We then transition from regression to classification, exploring two major paradigms for separating data: discriminative models and generative models. The course concludes with learning how to critically evaluate and compare classifier performance using industry-standard tools such as the ROC curve. Upon completion, you will have a strong command of the core principles that underpin modern predictive modeling.
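To illustrate the regression-with-regularization theme, here is a minimal sketch of ridge (L2-regularized) linear regression solved via its normal equations, on synthetic data of my own invention (the true line y = 3x - 1 is an assumption of the example, not from the course):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x - 1 plus small Gaussian noise.
X = rng.uniform(-1, 1, size=(50, 1))
y = 3.0 * X[:, 0] - 1.0 + 0.1 * rng.standard_normal(50)

# Append a bias column, then solve the ridge normal equations:
#   w = (X^T X + lambda * I)^(-1) X^T y
Xb = np.hstack([X, np.ones((50, 1))])
lam = 0.1
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(2), Xb.T @ y)
# w[0] is the slope, w[1] the intercept; with mild regularization they
# land close to the generating values 3 and -1.
```

The L2 penalty shrinks the weights slightly toward zero, which is the price paid for the robustness the course description mentions.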
- This course explores the principles of machine learning through the lens of one of its most powerful and versatile model classes: the artificial neural network. We will cover the fundamental machine learning concepts of modeling, training, and generalization. You will learn how to process the input data with feed-forward operations, how to train a neural network model using gradient-based optimization and the backpropagation algorithm, and how to ensure it performs well on new data using regularization. In the final module, we discuss Bayesian neural networks, learning how to build models that not only make predictions but also quantify their own uncertainty.
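The feed-forward-plus-backpropagation loop described above can be sketched in a few lines of NumPy. This toy example (fitting y = x² with one tanh hidden layer, all details my own assumptions rather than course code) derives the gradients by hand:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: learn y = x^2 on [-1, 1].
X = rng.uniform(-1, 1, size=(100, 1))
y = X ** 2

# One hidden layer of 8 tanh units, small random initial weights.
W1 = rng.standard_normal((1, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)   # feed-forward: hidden activations
    return h, h @ W2 + b2      # feed-forward: network output

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)  # mean squared error before training

for _ in range(500):
    h, pred = forward(X)
    # Backpropagation: push the MSE gradient back through each layer.
    g_pred = 2 * (pred - y) / len(X)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    g_W1 = X.T @ g_z
    g_b1 = g_z.sum(axis=0)
    # Gradient-descent update.
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)  # training should shrink this markedly
```

In practice frameworks compute these gradients automatically, but writing them out once makes the chain rule behind backpropagation concrete.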
Taught by
Peter Chin