Discover how IBM leverages Ray for massive-scale data processing, focusing on the Data Prep Kit's capabilities in AI training and scientific research applications.
Explore cost-effective LLM inference scaling using AWS accelerators, Ray, vLLM, and Anyscale. Learn to build a complete stack on EKS and leverage advanced cluster management for enterprise-grade GenAI workloads.
Learn to scale and productionize GenAI and LLM workloads cost-effectively using AWS compute instances and Anyscale's capabilities for ambitious AI projects.
Learn distributed model training with PyTorch and Ray. Migrate code, scale AI workflows, and optimize performance for large-scale training and fine-tuning on Anyscale.
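As a toy illustration of the data-parallel pattern that Ray Train orchestrates for PyTorch (plain Python here, no Ray or PyTorch — the function names and the tiny linear-regression task are invented for this sketch): each worker computes gradients on its own data shard, the gradients are averaged (the "all-reduce" step), and every worker applies the same averaged update.

```python
# Data-parallel training in miniature (illustrative only, not Ray Train's API):
# workers hold shards, gradients are averaged, updates stay in sync.

def local_gradient(weights, shard):
    # Gradient of mean squared error for the model y = w * x on one shard.
    return sum(2 * (weights * x - y) * x for x, y in shard) / len(shard)

def train_step(weights, shards, lr=0.1):
    grads = [local_gradient(weights, s) for s in shards]  # runs in parallel in practice
    avg = sum(grads) / len(grads)                         # the "all-reduce" average
    return weights - lr * avg                             # identical update on every worker

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # samples from y = 2x
shards = [data[:2], data[2:]]                             # one shard per "worker"
w = 0.0
for _ in range(50):
    w = train_step(w, shards)
print(round(w, 2))  # converges toward 2.0
```

Because every worker applies the same averaged gradient, the model replicas never diverge — this is the invariant that frameworks like Ray Train maintain across real GPU workers.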
Explore building and scaling end-to-end LLM workflows, covering data processing, model fine-tuning, evaluations, and production inference with Anyscale's modern platform.
Explore building RAG-based chat assistants using Canopy and Anyscale Endpoints. Learn architecture, see live examples, and get started with your own project using flexible frameworks and serverless LLM APIs.
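The retrieval-then-prompt flow at the heart of any RAG assistant can be sketched in a few lines. This is a deliberately toy version — keyword overlap stands in for the vector search Canopy actually performs, and `retrieve`/`build_prompt` are invented names, not Canopy's API:

```python
# Minimal RAG sketch: score documents against the query, then stuff the
# top matches into the prompt as context. Toy keyword overlap replaces
# real embedding-based vector search.
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, docs, k=2):
    q = tokens(query)
    # Rank documents by how many query words they share (descending).
    return sorted(docs, key=lambda d: -len(q & tokens(d)))[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ray Serve deploys and scales Python models.",
    "Canopy builds RAG pipelines on Pinecone.",
    "Anyscale Endpoints serve open-source LLMs.",
]
print(build_prompt("How do I serve models with Ray?", docs))
```

A production system swaps the overlap scorer for embeddings and a vector index, but the shape — retrieve, assemble context, ask the LLM — is the same.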
Explore Ray Train's architecture for efficient, cost-effective distributed deep learning. Learn about resource scheduling, API simplicity, and exclusive features for LLM training, including Distributed Checkpointing.
Explore Ray scheduling features to optimize AI applications, enhancing performance and cost-efficiency. Learn how placement groups, graceful node draining, and label-based scheduling enable faster, cheaper operations.
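The intuition behind Ray's two most common placement-group strategies can be shown with a small simulation (plain Python, not the Ray API — `place_bundles` and its first-fit logic are invented for illustration): PACK co-locates resource bundles on as few nodes as possible, while SPREAD distributes them across nodes.

```python
# Conceptual simulation of PACK vs. SPREAD bundle placement.
# A "bundle" is just a CPU count; nodes all have the same capacity.

def place_bundles(bundles, node_capacity, strategy):
    """Return the node index assigned to each bundle."""
    free = []          # remaining CPUs on each opened node
    assignment = []
    for cpus in bundles:
        if strategy == "PACK":
            # First-fit: fill existing nodes before opening new ones.
            target = next((i for i, f in enumerate(free) if f >= cpus), None)
        else:  # "SPREAD": put each bundle on a fresh node while nodes last
            target = None
        if target is None:
            free.append(node_capacity)   # open a new node
            target = len(free) - 1
        free[target] -= cpus
        assignment.append(target)
    return assignment

print(place_bundles([1, 1, 1, 1], node_capacity=2, strategy="PACK"))    # [0, 0, 1, 1]
print(place_bundles([1, 1, 1, 1], node_capacity=2, strategy="SPREAD"))  # [0, 1, 2, 3]
```

PACK minimizes cross-node traffic (good for tightly coupled training workers); SPREAD maximizes fault isolation — the trade-off that drives the choice in real Ray clusters.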
Explore Ray's scalability improvements post-2.0, including health checks and resource broadcasting. Learn to develop scalable code for large-scale ML workloads and understand challenges in building a 4000-node cluster.
Learn to scale probabilistic time-series forecasting using Ray for financial markets. Explore techniques for handling non-stationarity and improving model robustness through back-testing and distributed computing.
Learn to develop, evaluate, and scale RAG-based LLM applications for production, including advanced topics like hybrid routing to bridge the gap between open-source and closed LLMs.
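Hybrid routing of the kind mentioned above can be sketched as a thin dispatcher: answer with a cheap open-source model by default, and escalate to a closed model when a quality estimate falls below a threshold. Everything here is hypothetical — the scorer, the threshold, and the backend names are placeholders, not the talk's implementation:

```python
# Hypothetical hybrid router: cheap model by default, escalate when the
# estimated quality of the open-source answer is too low.

def route(query, score_fn, threshold=0.7):
    """Return which backend should answer the query."""
    # score_fn estimates how well the open-source model handles this query
    # (in practice this is a learned classifier or an evaluation model).
    return "oss-llm" if score_fn(query) >= threshold else "closed-llm"

# Toy scorer: pretend short questions are easy for the open model.
toy_score = lambda q: 1.0 if len(q.split()) < 8 else 0.2

print(route("What is Ray?", toy_score))  # oss-llm
print(route("Compare distributed training trade-offs across Ray, Spark and Dask", toy_score))  # closed-llm
```

The economics follow directly: every query the router keeps on the open-source path is one not billed at closed-model rates, at the cost of occasional quality misses when the scorer is wrong.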
Explore Lockheed Martin's AI Factory ecosystem for training, deploying, and sustaining AI solutions. Learn how Ray enhances ML workloads, enables distributed computing, and integrates with various tools to accelerate AI development.
Explore efficient deployment of multiple models using Ray Serve's features: model composition, multi-application, and model multiplexing. Learn industry patterns and case studies for optimizing resource utilization.
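The model-multiplexing pattern from that talk — one replica serving many per-tenant models by loading them on demand and evicting cold ones — reduces, at its core, to an LRU cache. The sketch below is conceptual (plain Python; `MultiplexedReplica` and `load_fn` are invented names, not Ray Serve's actual multiplexing API):

```python
# Conceptual model multiplexing: a replica keeps at most max_models loaded,
# loads on first request, and evicts the least recently used model.
from collections import OrderedDict

class MultiplexedReplica:
    def __init__(self, max_models, load_fn):
        self.max_models = max_models
        self.load_fn = load_fn        # e.g. loads weights from object storage
        self.cache = OrderedDict()    # model_id -> model, in LRU order

    def get_model(self, model_id):
        if model_id in self.cache:
            self.cache.move_to_end(model_id)      # mark as recently used
        else:
            if len(self.cache) >= self.max_models:
                self.cache.popitem(last=False)    # evict least recently used
            self.cache[model_id] = self.load_fn(model_id)
        return self.cache[model_id]

replica = MultiplexedReplica(max_models=2, load_fn=lambda mid: f"model:{mid}")
replica.get_model("a"); replica.get_model("b"); replica.get_model("a")
replica.get_model("c")          # cache is full: evicts "b", the LRU entry
print(list(replica.cache))      # ['a', 'c']
```

The resource win is that N tenant models share one replica's memory budget instead of needing N always-on deployments — the utilization pattern the talk's case studies revolve around.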
Explore Amazon's exabyte-scale migration from Spark to Ray, covering challenges, strategies, and future vision for integrating Ray into critical data pipelines.
Explore DoorDash's journey in modernizing its model-serving platform using Ray Serve, focusing on flexibility and self-service for diverse ML applications and frameworks.