Explore vLLM's efficiency for LLM deployment and Neural Magic's enterprise solutions for cost-effective, scalable AI model services.
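As a reference point, here is a minimal offline-inference sketch using vLLM's Python API; the model ID and prompts are arbitrary examples, not ones from the talk:

```python
# Minimal vLLM offline-inference sketch. The model ID below is just an
# example; any Hugging Face causal LM that vLLM supports will work.
from vllm import LLM, SamplingParams

prompts = [
    "Explain sparsity in neural networks in one sentence.",
    "Why deploy LLMs with continuous batching?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # small model so the sketch runs on modest hardware
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each output carries the prompt and one or more generated completions.
    print(output.prompt, "->", output.outputs[0].text)
```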
Optimize YOLOv8 models to be 10x smaller and 8x faster on CPUs. Learn sparsification techniques, quantization, and deployment strategies for efficient computer vision applications.
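For orientation, the sketch below illustrates the two compression steps named above in plain PyTorch; it is not Neural Magic's SparseML/DeepSparse recipe, and the layer sizes and 80% sparsity target are arbitrary:

```python
# Generic illustration of the two compression steps the course covers:
# magnitude pruning and quantization. Plain PyTorch, not Neural Magic's
# pipeline; thresholds and shapes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(), nn.Linear(16 * 30 * 30, 10)
)

# 1) Unstructured magnitude pruning: zero out the 80% smallest conv weights.
prune.l1_unstructured(model[0], name="weight", amount=0.8)
prune.remove(model[0], "weight")  # bake the mask into the weight tensor

# 2) Dynamic int8 quantization of the linear layer (CPU inference).
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 3, 32, 32)
print(quantized(x).shape)  # torch.Size([1, 10])
```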
Optimize LLMs with SparseGPT: prune and quantize models for efficient CPU deployment at GPU speeds. Explore one-shot compression, benchmarks, and practical implementation.
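A simplified sketch of the second-order saliency score that SparseGPT-style one-shot pruning builds on follows: each weight is scored by w^2 / [H^-1]_jj using a Hessian proxy built from calibration activations. It omits SparseGPT's column-by-column weight updates, and all sizes are toy values:

```python
# Simplified one-shot pruning sketch: H = X^T X from calibration data,
# weight (i, j) scored by w_ij^2 / [H^-1]_jj, lowest-saliency half zeroed.
# Real SparseGPT also updates the remaining weights, which this omits.
import torch

torch.manual_seed(0)
d_in, d_out, n_calib, sparsity = 64, 32, 256, 0.5

W = torch.randn(d_out, d_in)          # layer weight
X = torch.randn(n_calib, d_in)        # calibration activations

H = X.T @ X + 1e-2 * torch.eye(d_in)  # damped Hessian proxy
Hinv_diag = torch.linalg.inv(H).diagonal()

saliency = W.pow(2) / Hinv_diag       # broadcasts over rows
threshold = saliency.flatten().kthvalue(int(sparsity * W.numel())).values
W_pruned = torch.where(saliency > threshold, W, torch.zeros_like(W))

print(f"sparsity achieved: {(W_pruned == 0).float().mean():.2f}")
```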
Adapt advanced pruning and quantization methods to ML models, achieving 4x speedups while recovering 99% of baseline accuracy in minutes, using computer vision and NLP examples.
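One common ingredient in this kind of recipe-driven workflow is a gradual sparsity schedule; the sketch below implements the cubic magnitude-pruning ramp of Zhu & Gupta (2017) with illustrative step counts, not the course's exact recipe:

```python
# Gradual magnitude-pruning schedule (cubic ramp, Zhu & Gupta 2017).
# Sparsity rises from init_sparsity to final_sparsity over [start, end].
def sparsity_at(step, start, end, final_sparsity, init_sparsity=0.0):
    """Cubic interpolation of the target sparsity at a training step."""
    if step <= start:
        return init_sparsity
    if step >= end:
        return final_sparsity
    progress = (step - start) / (end - start)
    return final_sparsity + (init_sparsity - final_sparsity) * (1 - progress) ** 3

for step in range(0, 1001, 250):
    print(step, round(sparsity_at(step, start=100, end=900, final_sparsity=0.9), 3))
```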
Explore second-order pruning algorithms for model compression, achieving higher sparsity while maintaining accuracy. Learn to apply these techniques to your ML projects for improved efficiency.
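The classic identity these second-order methods build on is the Optimal Brain Surgeon rule; the toy least-squares check below verifies that the OBS compensating update raises a quadratic loss by exactly w_q^2 / (2 [H^-1]_qq), and by less than naively zeroing the weight:

```python
# Numeric check of the Optimal Brain Surgeon (OBS) rule: removing weight q
# with the compensating update dw = -(w_q / [H^-1]_qq) * H^-1 e_q raises a
# quadratic loss by exactly w_q^2 / (2 [H^-1]_qq).
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 8)), rng.normal(size=100)

H = X.T @ X
w_star = np.linalg.solve(H, X.T @ y)       # loss minimizer
Hinv = np.linalg.inv(H)
loss = lambda w: 0.5 * np.sum((X @ w - y) ** 2)

q = 3                                       # weight to prune
dw = -(w_star[q] / Hinv[q, q]) * Hinv[:, q]
w_obs = w_star + dw                         # w_obs[q] is (numerically) zero

predicted_increase = w_star[q] ** 2 / (2 * Hinv[q, q])
print(np.isclose(loss(w_obs) - loss(w_star), predicted_increase))  # True

w_naive = w_star.copy()
w_naive[q] = 0.0
print(loss(w_obs) <= loss(w_naive))         # True: OBS update hurts less
```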
Explore AC/DC (Alternating Compressed/DeCompressed training), a sparse training algorithm for DNNs. Learn its benefits, implementation, and deployment for improved performance and accuracy in deep learning.
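A toy sketch of the alternating idea follows, with a top-k magnitude mask enforced during compressed phases and removed during decompressed phases; the phase length and sparsity level are illustrative, not the paper's settings:

```python
# Toy AC/DC-style loop: alternate dense ("decompressed") phases with sparse
# ("compressed") phases where a top-k magnitude mask is enforced each step.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(20, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
X, y = torch.randn(256, 20), torch.randn(256, 1)
sparsity, phase_len = 0.8, 50

def magnitude_mask(w, sparsity):
    k = int(sparsity * w.numel())
    thresh = w.abs().flatten().kthvalue(k).values
    return (w.abs() > thresh).float()

for step in range(400):
    compressed = (step // phase_len) % 2 == 1   # alternate phases
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
    if compressed:
        with torch.no_grad():
            model.weight.mul_(magnitude_mask(model.weight, sparsity))

print(f"final weight sparsity: {(model.weight == 0).float().mean():.2f}")
```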
Explore the transfer performance and efficiency gains of sparse models on image and language tasks, where they outperform dense models even at high sparsities.
Explore the evolving landscape of LLM compression techniques, from cutting-edge research to practical implementation, examining quantization and sparsity tradeoffs for optimizing generative AI systems in production environments.
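One tradeoff in that space is easy to see numerically: per-channel int8 weight quantization typically reconstructs weights far better than a single per-tensor scale at the same bit width. A small check on synthetic weights with made-up channel statistics:

```python
# Per-tensor vs. per-channel int8 weight quantization on synthetic weights
# with deliberately uneven per-channel magnitudes.
import torch

torch.manual_seed(0)
W = torch.randn(256, 512) * torch.logspace(-2, 0, 256).unsqueeze(1)

def quantize(w, scale):
    q = torch.clamp(torch.round(w / scale), -128, 127)
    return q * scale  # dequantized reconstruction

per_tensor = quantize(W, W.abs().max() / 127)
per_channel = quantize(W, W.abs().amax(dim=1, keepdim=True) / 127)

print("per-tensor MSE: ", torch.mean((W - per_tensor) ** 2).item())
print("per-channel MSE:", torch.mean((W - per_channel) ** 2).item())
```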
Explore SDG Hub, an open-source toolkit for customizing language models with synthetic data, covering its components, strategies for teacher model selection, and real-world applications through demonstrations and examples.
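To make the teacher-model idea concrete, here is a generic synthetic-data sketch that prompts a stand-in teacher (the OpenAI client, with a hypothetical prompt and model choice) to turn seed facts into question-answer pairs; it does not reproduce SDG Hub's actual pipeline API:

```python
# Generic synthetic-data pattern: prompt a teacher model to produce training
# pairs from seed examples. The OpenAI client is a stand-in teacher here;
# SDG Hub's own API is not shown, just the general idea.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
seed_facts = [
    "vLLM uses continuous batching to raise GPU utilization.",
    "Pruning removes low-saliency weights from a network.",
]

synthetic_pairs = []
for fact in seed_facts:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice of teacher model
        messages=[{
            "role": "user",
            "content": f"Write one question a student might ask that is answered by: {fact}",
        }],
    )
    synthetic_pairs.append({"question": resp.choices[0].message.content, "answer": fact})

print(synthetic_pairs[0])
```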
Explore continual learning for LLMs through a practical method that enables fine-tuning without compromising existing capabilities, using low-rank subspace constraints to preserve knowledge while adapting to new tasks.
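The underlying mechanism can be sketched with a generic LoRA-style adapter, which confines updates to a rank-r subspace while the pretrained weights stay frozen; this illustrates the general idea, not the specific subspace-constraint method from the talk:

```python
# Low-rank adapter sketch: the base layer is frozen and all learning happens
# in the rank-r subspace spanned by B @ A, preserving existing behavior.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: delta starts at 0
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

layer = LowRankAdapter(nn.Linear(64, 64), rank=4)
out = layer(torch.randn(2, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # torch.Size([2, 64]) 512
```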
Explore a novel combinatorial approach to neural network interpretability through the Feature Channel Coding Hypothesis, revealing how networks compute Boolean expressions and the natural limitations of code interference.
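As a minimal illustration of the premise that network units can realize Boolean expressions, single ReLU units over {0, 1} inputs suffice to compute AND and OR; this generic example is not the talk's feature-channel construction:

```python
# Single ReLU units computing Boolean AND and OR over {0, 1} inputs.
import numpy as np

relu = lambda z: np.maximum(z, 0)
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

for x1, x2 in inputs:
    and_out = relu(x1 + x2 - 1)        # fires only when both inputs are 1
    or_out = min(1, relu(x1 + x2))     # fires when either input is 1
    print(f"x=({x1},{x2})  AND={and_out}  OR={or_out}")
```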