PerturBench - Benchmarking Machine Learning Models for Cellular Perturbation Analysis
Valence Labs via YouTube
Overview
Explore a comprehensive benchmarking framework for machine learning models in cellular perturbation analysis through this 56-minute research presentation. Discover how PerturBench standardizes evaluation in the rapidly evolving field of perturbation response modeling for single cells, offering a modular platform for model development, diverse perturbational datasets, and specialized metrics for fair model comparison. Learn how the extensive evaluation results reveal limitations in widely used models, including mode-collapse issues, and why rank metrics are crucial alongside traditional measures like RMSE for validating model effectiveness. Examine findings showing that simpler architectures often compete well with complex models and scale effectively to larger datasets, while no single architecture demonstrates clear superiority across all scenarios. Gain insights into how this benchmarking approach establishes new evaluation standards, supports robust model development, and advances the potential of high-throughput genetic and chemical screens for disease target discovery.
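The point about rank metrics complementing RMSE can be made concrete with a small sketch. The talk does not specify which rank metric PerturBench uses, so Spearman rank correlation is an illustrative assumption here, and the toy data below is invented: a prediction that has collapsed to a near-constant value (a "mode collapse" failure) can still post a modest RMSE while carrying no ranking information about which perturbations respond most strongly.

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed effects."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

def rank(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean rank of the tied run i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(pred, obs):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rp, ro = rank(pred), rank(obs)
    n = len(rp)
    mp, mo = sum(rp) / n, sum(ro) / n
    cov = sum((a - mp) * (b - mo) for a, b in zip(rp, ro))
    sp = math.sqrt(sum((a - mp) ** 2 for a in rp))
    so = math.sqrt(sum((b - mo) ** 2 for b in ro))
    return cov / (sp * so)

# Hypothetical observed perturbation effects and two predictors:
obs = [0.1, 0.9, 0.4, 0.7, 0.2]
collapsed = [0.46] * 5                      # predicts ~the mean everywhere
faithful = [0.15, 0.8, 0.45, 0.65, 0.25]    # noisy but order-preserving

print(rmse(collapsed, obs))     # modest error despite being uninformative
print(rmse(faithful, obs))      # lower error
print(spearman(faithful, obs))  # ranking fully preserved
```

The collapsed predictor's Spearman correlation is undefined (its ranks have zero variance), which is exactly the kind of failure a pure RMSE comparison hides; this is why the talk argues for reporting rank metrics alongside error metrics.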
Syllabus
PerturBench: Benchmarking Machine Learning Models for Cellular Perturbation Analysis
Taught by
Valence Labs