Overview
Explore the comprehensive process of training custom Large Language Models (LLMs) in this 32-minute conference talk by Reza Shabani from Replit. Gain insights into the entire workflow, from data processing to deployment, including the modern LLM stack, data pipelines using Databricks and Hugging Face, preprocessing techniques, tokenizer training, and running training with MosaicML and Weights & Biases. Learn about testing and evaluation methods using HumanEval and Hugging Face, as well as deployment strategies involving FasterTransformer, Triton Server, and Kubernetes. Discover valuable lessons on data-centrism, evaluation, and collaboration, and understand the qualities that make an effective LLM engineer.
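One of the steps the talk covers is tokenizer training. As a rough illustration of what that involves, here is a minimal pure-Python sketch of the byte-pair-encoding (BPE) idea behind most LLM tokenizers: repeatedly merge the most frequent adjacent symbol pair in the corpus. The tiny corpus and merge count are illustrative placeholders, not the settings used at Replit, where a library such as Hugging Face `tokenizers` or `sentencepiece` would be used in practice.

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merge rules from a list of text lines (toy sketch)."""
    # Represent each word as a tuple of symbols, starting from characters.
    words = Counter(tuple(w) for line in corpus for w in line.split())
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge everywhere it occurs.
        new_words = Counter()
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words
    return merges

merges = train_bpe(["low lower lowest", "low low"], num_merges=3)
# The first learned merge rule is the most frequent character pair.
```

The production version differs mainly in scale (training on a large code corpus with a vocabulary of tens of thousands of tokens), but the merge-rule idea is the same.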
Syllabus
Why train your own LLMs?
The Modern LLM Stack
Data Pipelines: Databricks & Hugging Face
Preprocessing
Tokenizer Training
Running Training: MosaicML, Weights & Biases
Testing & Evaluation: HumanEval, Hugging Face
Deployment: FasterTransformer, Triton Server, k8s
Lessons learned: data-centrism, eval, and collaboration
What makes a good LLM engineer?
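The evaluation step in the syllabus uses HumanEval, which judges a model by whether its generated code actually passes hidden unit tests rather than by text similarity. A minimal sketch of that functional-correctness check, with a hard-coded completion standing in for real model output (the function and test strings here are hypothetical examples, not HumanEval problems):

```python
def check_completion(completion_src, test_src, entry_point):
    """Return True if the generated code passes the hidden tests."""
    env = {}
    try:
        exec(completion_src, env)       # define the candidate function
        exec(test_src, env)             # define the test harness
        env["check"](env[entry_point])  # run the hidden assertions
        return True
    except Exception:
        return False

# Stand-in for a model-generated completion and its hidden tests.
completion = "def incr(x):\n    return x + 1\n"
tests = "def check(f):\n    assert f(1) == 2\n    assert f(-1) == 0\n"

passed = check_completion(completion, tests, "incr")
```

The real benchmark samples many completions per problem and reports pass@k; this sketch shows only the single pass/fail check at its core, and a production harness would sandbox the `exec` calls.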
Taught by
The Full Stack