Domain Adaptation with Invariant Representation Learning - What Transformations to Learn?
Stanford University via YouTube
Overview
Explore domain adaptation techniques for invariant representation learning in this Stanford University lecture. Delve into the challenges of unsupervised domain adaptation and learn why fixed mappings across domains may be insufficient. Discover an efficient method that incorporates domain-specific information to generate optimal representations for classification. Examine the importance of minimal changes in causal mechanisms across domains and how this approach preserves valuable information. Follow along as the speaker presents synthetic and real-world data experiments demonstrating the effectiveness of the proposed technique. Gain insights into transfer learning, causal discovery, and their applications in computational biology and cancer research.
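The adversarial objective touched on in the lecture aligns source and target feature distributions by driving their Jensen-Shannon divergence toward zero. As a minimal illustration of the quantity being minimized (not the speaker's implementation), here is a NumPy sketch of the JS divergence on toy discrete histograms; the distributions and values are purely illustrative:

```python
import numpy as np

def kl_divergence(p, q):
    # KL(p || q) for discrete distributions; assumes q > 0 wherever p > 0
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetric and bounded above by log(2)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Illustrative feature histograms from a "source" and "target" domain
source = np.array([0.4, 0.3, 0.2, 0.1])
target = np.array([0.1, 0.2, 0.3, 0.4])

print(js_divergence(source, target))  # positive: the domains differ
print(js_divergence(source, source))  # 0.0: identical distributions
```

In adversarial training, a domain discriminator estimates this divergence between representation distributions, and the feature extractor is updated to reduce it, yielding domain-invariant representations.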
Syllabus
Introduction
Motivation
Why Don't They Work
Conditional Target Shift
Neural Network Setup
Minimize Jensen-Shannon Divergence
Adversarial Training
Translation
Optimization
Contrastive Training
Simulation
Datasets
Results
Future work
Taught by
Stanford MedAI