Overview
Syllabus
Lecture 27: Practical advice for using machine learning
Lecture 26: Neural networks (continued)
Lecture 25: Neural networks (continued)
Lecture 24b: Neural networks
Lecture 24a: Loss minimization (revisited)
Lecture 23b: Logistic regression
Lecture 23a: Bayesian learning (continued)
Lecture 22b: Introduction to Bayesian learning
Lecture 22a: Learning as loss minimization
Lecture 21: Stochastic gradient descent for SVMs
Lecture 20: Practical machine learning tutorial
Lecture 19: SVMs (continued)
Lecture 18b: Support vector machines
Lecture 18a: Boosting and ensembles (continued)
Lecture 17: Boosting
Lecture 16: VC dimension (continued)
Lecture 15: VC dimension
Lecture 14: Agnostic learning
Lecture 13: Learnability results for consistent learners
Lecture 12: Occam's razor for a consistent learner
Lecture 11: Computational learning theory
Lecture 10: Least mean squares regression
Lecture 9: Perceptron (continued)
Lecture 8b: The perceptron algorithm
Lecture 8a: Mistake bound learning (continued)
Lecture 7: The mistake bound model
Lecture 6b: Quantifying learning algorithms
Lecture 6a: Expressiveness of linear models
Lecture 5b: Linear models
Lecture 5a: Overfitting
Lecture 4: Decision trees (continued)
Lecture 3: Decision trees
Lecture 2: Supervised learning - the setup
Lecture 1: What is machine learning?
Taught by
UofU Data Science