Explore a comprehensive lecture on improving neural network reliability under distribution shift, focusing on OpenAI's CLIP model, which learns from image-text data. Delve into a large-scale experimental study comparing over 200 models and test conditions, highlighting CLIP's exceptional robustness. Discover new methods for reliable model fine-tuning based on weight interpolation. Investigate the source of CLIP's robustness, revealing the crucial role of the pre-training dataset over language supervision. Gain insights into ongoing efforts to improve pre-training datasets, including the LAION-5B project and DataComp experiments aimed at increasing dataset-induced robustness.
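The weight-interpolation idea mentioned above can be sketched in a few lines: average the parameters of the original zero-shot model and the fine-tuned model, parameter by parameter. This is a minimal illustration using plain Python dicts as stand-in "state dicts" (the function name and toy values are hypothetical; real implementations interpolate framework tensors the same way).

```python
def interpolate_weights(zero_shot, fine_tuned, alpha):
    """Return (1 - alpha) * zero_shot + alpha * fine_tuned per parameter.

    alpha = 0 keeps the robust zero-shot model; alpha = 1 keeps the
    fine-tuned model; intermediate values trade off between the two.
    """
    assert zero_shot.keys() == fine_tuned.keys()
    return {
        name: (1 - alpha) * zero_shot[name] + alpha * fine_tuned[name]
        for name in zero_shot
    }

# Toy two-parameter "models" (hypothetical values for illustration)
zs = {"w": 1.0, "b": 0.0}
ft = {"w": 3.0, "b": 2.0}
mid = interpolate_weights(zs, ft, 0.5)
print(mid)  # {'w': 2.0, 'b': 1.0}
```

In practice the interpolation coefficient is swept over [0, 1], and intermediate values can improve accuracy under distribution shift while retaining most of the fine-tuned model's in-distribution accuracy.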