Dive into RWKV-7 "Goose" with author Eugene Cheah, exploring its linear-time inference, architectural innovations, benchmarks, and the World Tokenizer, with practical tips for running and fine-tuning this RNN-based model.
Delve into the technical aspects of LLaVa-CoT's vision language models, exploring data generation, training methodologies, and inference-time scaling for enhanced reasoning capabilities.
Discover how to fine-tune AI models for intelligent code completion, building Cursor-like "tab tab" functionality with practical examples and model comparisons.
Delve into the technical details of OpenCoder's development, from data preprocessing and deduplication to training methodology and evaluation metrics for building effective code LLMs.
Delve into the mechanics of DeepSeek R1's reinforcement learning, exploring GRPO's memory optimization, group advantages, and practical applications in AI model training.
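The group-advantage idea mentioned above can be sketched in a few lines. This is a simplified illustration of how GRPO normalizes each completion's reward against its sampling group, removing the need for a separate value (critic) network; it is not DeepSeek's actual implementation, and the reward values are made up for the example.

```python
import statistics

def group_advantages(rewards):
    """GRPO-style group advantage: normalize each reward against the
    group's mean and standard deviation. Completions better than the
    group average get positive advantage, worse ones get negative."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Rewards for a group of completions sampled from the same prompt
advs = group_advantages([1.0, 0.0, 0.5, 1.0])
```

Because each advantage is computed relative to the group itself, the advantages always sum to zero within a group, which is what lets GRPO skip the learned baseline.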
Explore the RAGAS framework's key evaluation criteria for Retrieval Augmented Generation (RAG) systems, covering faithfulness, answer relevance, and context relevance metrics for improved AI performance.
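As a toy illustration of the faithfulness metric covered above: faithfulness is the fraction of claims in the generated answer that are supported by the retrieved context. The real RAGAS library extracts and verifies claims with an LLM judge; in this sketch both sets are given explicitly, and the example claims are invented.

```python
def faithfulness_score(claims, supported):
    """Simplified faithfulness: share of answer claims that the
    retrieved context supports (1.0 = fully grounded answer)."""
    if not claims:
        return 0.0
    return sum(1 for c in claims if c in supported) / len(claims)

score = faithfulness_score(
    claims=["Paris is the capital of France", "Paris has 10M residents"],
    supported={"Paris is the capital of France"},
)
# score == 0.5: one of two claims is grounded in the context
```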
Master FLUX-dev fine-tuning with LoRA techniques, data requirements, hardware setup, and code walkthrough. Compare results against PixArt models for optimal AI image generation.
Master fine-tuning diffusion transformers to create high-quality AI images using synthetic data generation and advanced training techniques for professional results.
Discover how to fine-tune Qwen3.0-6B for Text2SQL tasks, outperforming GPT-4o through dataset preparation, model evaluation, and optimization techniques in this detailed walkthrough.
Dive into the technical architecture and development process of DeepSeek's R1 model, exploring GRPO implementation, reasoning capabilities, and model optimization techniques through detailed examples.
Explore the architecture and training process of Phi-4, a small multimodal model, including its mixture of LoRAs approach, vision and audio capabilities, and practical applications.
Explore systematic prompting techniques through an in-depth analysis of template structures, zero-shot methods, emotion prompting, and thought generation approaches for enhanced AI interactions.
Dive into the technical architecture and methodology behind Flux, exploring rectified flow transformers, latent diffusion models, and the innovative approaches that led to superior image generation results.
Dive into Meta's groundbreaking Llama 3 architecture, exploring pre-training techniques, model capabilities, synthetic data quality, and implementation strategies for advanced AI development.
Dive into the mechanics of Samba, a hybrid state space model built on Mamba that enables efficient language modeling with unlimited context length for advanced AI applications.