Artificial intelligence and machine learning now sit at the center of modern data science—powering tools and systems that detect patterns, learn from experience, and make high-stakes predictions. At the heart of many of these advances are neural networks: flexible models that learn layered representations from data. To work effectively with them, it’s not enough to recognize the terminology—you need to understand the principles and decisions that shape how these models are built, trained, and evaluated.
In this course, you’ll build that foundation in deep learning with an applied approach designed for Python-savvy data and technical professionals. You’ll learn how neural networks are structured, how they learn through optimization, and how core design choices—such as architecture, regularization, and learning rate—directly influence performance. The emphasis is on developing both practical skill and clear intuition, so you can move from “running models” to making informed modeling decisions.
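To make the learning-rate point concrete, here is a minimal illustrative sketch (not course material) of plain gradient descent on the one-variable function f(x) = x², whose gradient is 2x. A small step size steadily shrinks toward the minimum, while an overly large one overshoots and diverges:

```python
# Illustrative sketch: how the learning rate shapes plain gradient descent
# on f(x) = x**2, whose gradient is f'(x) = 2*x.

def gradient_descent(lr, x0=1.0, steps=25):
    """Return the final x after `steps` updates of x <- x - lr * f'(x)."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

print(gradient_descent(lr=0.1))  # small step: shrinks toward the minimum at 0
print(gradient_descent(lr=1.1))  # large step: every update overshoots, so |x| grows
```

Each update multiplies x by (1 − 2·lr), so any learning rate above 1.0 flips the sign and increases the magnitude—a one-dimensional picture of why training can diverge when the step size is too large.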
The course also introduces two foundational ideas that power today’s most effective workflows: transfer learning and self-supervised learning. You’ll explore how pre-trained models can be adapted to new tasks, and how autoencoders can learn meaningful representations from unlabeled data—connecting fundamental neural network concepts to the approaches behind many modern AI applications.
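As a taste of the autoencoder idea, here is a hypothetical sketch (not taken from the course) of a linear autoencoder: unlabeled two-dimensional points lying on a line are squeezed through a one-dimensional code and reconstructed, with encoder and decoder weights learned by gradient descent on the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)
# Unlabeled data: 200 points on the line spanned by (2, 1) in the plane.
X = rng.normal(size=(200, 1)) @ np.array([[2.0, 1.0]])

w_enc = rng.normal(size=(2, 1)) * 0.1  # encoder: R^2 -> R^1 (the "code")
w_dec = rng.normal(size=(1, 2)) * 0.1  # decoder: R^1 -> R^2

lr = 0.01
for _ in range(500):
    code = X @ w_enc       # encode each point to a single number
    X_hat = code @ w_dec   # decode it back to two dimensions
    err = X_hat - X        # reconstruction error
    # Gradient directions of the squared reconstruction error.
    g_dec = 2 * code.T @ err / len(X)
    g_enc = 2 * X.T @ (err @ w_dec.T) / len(X)
    w_dec -= lr * g_dec
    w_enc -= lr * g_enc

mse = float(np.mean((X @ w_enc @ w_dec - X) ** 2))
print(f"final reconstruction MSE: {mse:.4f}")
```

Because the data truly lies on a one-dimensional subspace, the bottleneck can reconstruct it almost perfectly from the code alone; a nonlinear autoencoder stacks the same idea with activation functions and more layers.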
Through hands-on examples, you’ll build and train neural networks from scratch and apply them to supervised and unsupervised learning problems. Along the way, you’ll sharpen your ability to diagnose model behavior, assess data quality, and understand when and why neural networks generalize—or struggle—so you can apply these methods with confidence in real-world analytical and research settings.
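For a flavor of what “from scratch” looks like, here is a hypothetical sketch (an assumed setup, not the course’s own code) of a two-layer network trained with hand-written backpropagation on the classic XOR problem, which no single-layer model can solve:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden layer (tanh)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer (sigmoid)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p

lr = 0.5
for _ in range(5000):
    h, p = forward(X)
    dz2 = (p - y) / len(X)             # sigmoid + cross-entropy: output gradient
    gW2, gb2 = h.T @ dz2, dz2.sum(axis=0)
    dh = (dz2 @ W2.T) * (1.0 - h**2)   # backpropagate through tanh
    gW1, gb1 = X.T @ dh, dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, p = forward(X)
print("predicted probabilities:", np.round(p.ravel(), 3))
```

The tanh hidden layer supplies the nonlinearity XOR requires; removing it collapses the model to logistic regression, which cannot fit these four points—one small example of diagnosing why a model succeeds or struggles.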
Learners should have prior experience with Python programming, basic machine learning concepts, and introductory statistics.