$7.2 billion in combined revenue since 2020. $8 billion in lost market value. This merger marks the end of an era in online education.
Machine Learning
Python
Microsoft Excel
Artificial Intelligence
Python for Data Science
Introduction to Philosophy
Explore ReFT, a novel approach to fine-tuning language models by modifying internal representations, achieving efficiency with fewer parameters than traditional methods.
Explore LayerSkip, an LLM acceleration method that speeds up inference by exiting from earlier layers and verifying the draft output with the remaining layers, achieving up to 2x speed-ups on various tasks through layer dropout and self-speculative decoding.
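As a toy illustration of the early-exit idea behind such acceleration methods, the sketch below runs layers only until an attached classifier becomes confident. The layers, classifier, and threshold here are made-up stand-ins, not LayerSkip's actual architecture or training recipe.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def early_exit_forward(x, layers, classifier, threshold=0.9):
    """Run layers sequentially; stop as soon as the classifier's top
    probability exceeds `threshold`, skipping the remaining layers."""
    for depth, layer in enumerate(layers, start=1):
        x = layer(x)
        probs = softmax(classifier(x))
        if probs.max() >= threshold:
            return probs, depth  # early exit
    return probs, len(layers)    # ran the full stack

# Toy model: each "layer" nudges the hidden state; classifier is linear.
layers = [(lambda x: x + 0.5)] * 8
W = np.tile([0.5, 0.0, -0.5], (4, 1))  # (hidden=4, classes=3)
classifier = lambda x: x @ W

probs, depth = early_exit_forward(np.zeros(4), layers, classifier)
```

Here the prediction sharpens as layers accumulate, so the loop exits well before the eighth layer; LayerSkip additionally verifies early-exit drafts with the skipped layers, which this sketch omits.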
Explore DSPy, a programming model for LM pipelines that enables self-improving AI systems through declarative modules and computational graphs. Learn its potential to enhance AI performance.
Explore knowledge distillation techniques for large language models, focusing on reverse KLD to improve student model precision and response quality.
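The reverse-KL objective mentioned above fits in a few lines. The logits below are synthetic, and this is only the bare loss, not a full distillation pipeline: the point is that reverse KL penalizes a student that spreads probability where the teacher has little, pushing it toward the teacher's modes.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def reverse_kl(student_logits, teacher_logits):
    """KL(student || teacher): mode-seeking, unlike forward KL."""
    p = softmax(student_logits)   # student distribution
    q = softmax(teacher_logits)   # teacher distribution
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.0, -2.0])   # peaked teacher
broad   = np.array([1.0, 1.0, 1.0, 1.0])    # student spreads mass everywhere
peaked  = np.array([5.0, 0.0, -1.0, -3.0])  # student matches the main mode

loss_broad  = reverse_kl(broad, teacher)
loss_peaked = reverse_kl(peaked, teacher)
```

The broad student incurs a much larger reverse-KL loss than the mode-matching one, which is the precision-favoring behavior the summary refers to.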
Learn to build an interactive ChatBot using Unify, exploring Synchronous and Asynchronous clients and integrating with various LLMs for dynamic AI conversations.
Explore groundbreaking research on BitNet b1.58, a ternary parameter model matching full-precision Transformers while offering improved cost-effectiveness and defining new LLM scaling laws.
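A minimal sketch of absmean-style ternary quantization, the rounding scheme associated with 1.58-bit weights: scale by the mean absolute value, round, and clip to {-1, 0, +1}. The single per-matrix scale here is a simplification of the paper's scheme.

```python
import numpy as np

def absmean_ternary(W):
    """Quantize a weight matrix to {-1, 0, +1} with one absmean scale.
    Illustrative sketch, not the full BitNet b1.58 training procedure."""
    gamma = np.mean(np.abs(W)) + 1e-8          # absmean scale
    Wq = np.clip(np.round(W / gamma), -1, 1)   # ternary values
    return Wq, gamma

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 4))
Wq, gamma = absmean_ternary(W)
W_approx = Wq * gamma   # dequantized approximation of W
```

Storing only ternary values plus one scale is what drives the cost savings: matrix multiplies reduce to additions and subtractions.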
Explore SparQ Attention, a technique for increasing LLM inference throughput by reducing memory bandwidth in attention blocks through selective history fetching, applicable to off-the-shelf models without retraining.
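The two-step idea behind selective history fetching can be sketched as below: score the history cheaply using only the largest-magnitude query components, then run exact attention over just the top-scoring keys. Shapes and the parameters `r` and `k` are illustrative choices, not the paper's settings.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sparq_attention(q, K, V, r, k):
    """Approximate-then-exact attention in the spirit of SparQ (sketch).
    Only the top-k key/value rows are fetched for the exact pass,
    which is where the memory-bandwidth saving would come from."""
    d = q.shape[0]
    idx_r = np.argsort(np.abs(q))[-r:]           # dominant query components
    approx = K[:, idx_r] @ q[idx_r]              # cheap approximate scores
    idx_k = np.argsort(approx)[-k:]              # history worth fetching
    scores = softmax(K[idx_k] @ q / np.sqrt(d))  # exact scores on the subset
    return scores @ V[idx_k]

rng = np.random.default_rng(1)
q = rng.normal(size=8)
K = rng.normal(size=(32, 8))
V = rng.normal(size=(32, 8))
out = sparq_attention(q, K, V, r=4, k=8)
```

A real implementation also reweights for the dropped mass and batches this across heads; this sketch shows only the selection logic.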
Explore OpenMoE, an open-source Mixture-of-Experts language model series. Discover its cost-effectiveness, routing mechanisms, and key insights in comparison to dense LLMs.
Explore Teacher-Student architectures in knowledge distillation for AI model compression, expansion, adaptation, and enhancement. Gain insights into cutting-edge research and practical applications.
Explore SliceGPT's innovative approach to compressing large language models by deleting entire rows and columns of weight matrices, maintaining high performance while significantly reducing parameter counts.
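A toy version of the slicing idea: rotate a layer's input space using PCA of sample activations, then delete the minor directions, which removes whole rows of the rotated weight matrix. This sketch handles one linear layer with synthetic low-rank activations; the paper's method covers full transformer blocks and residual streams.

```python
import numpy as np

def slice_layer(W, X, keep):
    """PCA-rotate the input dimension of W using activations X, then
    keep only the top `keep` directions. Illustrative sketch of
    SliceGPT-style structured slicing."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Q = Vt[:keep].T            # (d_in, keep) top principal directions
    return Q, Q.T @ W          # project inputs with Q, use the sliced W

rng = np.random.default_rng(0)
d_in, d_out, n = 16, 8, 200
basis = rng.normal(size=(6, d_in))        # activations live in 6 dimensions
X = rng.normal(size=(n, 6)) @ basis
W = rng.normal(size=(d_in, d_out))

Q, W_sliced = slice_layer(W, X, keep=6)
approx = (X @ Q) @ W_sliced               # forward pass with the sliced layer
exact = X @ W
```

Because the toy activations genuinely occupy a 6-dimensional subspace, the sliced layer reproduces the full layer's outputs while carrying 6x8 instead of 16x8 weights.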
Explore Contrastive Preference Optimization, a novel approach to enhance LLM performance in translation tasks by guiding models towards producing superior translations.
Explore OpenAI's Sora and Google's Lumiere text-to-video models. Compare architectures, capabilities, and emerging simulation features for diverse, coherent video generation from text prompts.
Explore hardware accelerators enhancing LLM performance and efficiency, covering GPUs, FPGAs, and specialized designs in this comprehensive survey presentation.
Explore Distil-Whisper: a compact, efficient speech recognition model. Learn about its robust knowledge distillation, large-scale pseudo-labelling, and performance compared to the larger Whisper model.
Explore essential sparsity in large pre-trained models, examining how removing the smallest-magnitude weights affects the performance and efficiency of large transformers.
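One-shot magnitude pruning, the kind of weight removal such studies measure, can be sketched as below; the matrix and sparsity level are arbitrary examples.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest
    absolute value (one-shot, no retraining). Illustrative sketch."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    threshold = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    pruned = W.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 32))
pruned = magnitude_prune(W, sparsity=0.5)
```

The "essential sparsity" question is how far this fraction can rise before performance drops sharply, which the survey examines across model scales.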