Overview
Learn to deploy high-performance large language models with TensorRT-LLM in this workshop recorded at the AI Engineer World's Fair. The session covers the complete pipeline: selecting and optimizing models, building TensorRT-LLM engines, configuring batch sizes and sequence lengths, and deploying on cloud GPUs. It also addresses how to overcome TensorRT-LLM's steep learning curve while achieving significant performance gains in production environments. Philip Kiely and Pankaj Gupta share their real-world experience running TensorRT and TensorRT-LLM in production systems, including both the performance benefits and the common implementation challenges you'll encounter when getting started with this model-serving framework.
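As an illustration of the engine-build-and-serve flow described above (not the speakers' exact workflow), here is a minimal sketch using TensorRT-LLM's high-level Python LLM API. The model name is a placeholder, and exact class names, arguments, and defaults vary by TensorRT-LLM version.

    from tensorrt_llm import LLM, SamplingParams  # high-level API; requires a CUDA GPU

    # Placeholder model: any Hugging Face checkpoint supported by TensorRT-LLM.
    # LLM() compiles the checkpoint into a TensorRT-LLM engine on first run.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # Batched generation: the runtime schedules requests up to the engine's
    # configured maximum batch size and sequence length.
    outputs = llm.generate(["What is TensorRT-LLM?"], sampling)
    for out in outputs:
        print(out.outputs[0].text)

In a production deployment of the kind discussed in the talk, the compiled engine would typically be served behind an API endpoint (for example via an inference server) rather than called from a script like this.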
Syllabus
From model weights to API endpoint with TensorRT LLM: Philip Kiely and Pankaj Gupta
Taught by
AI Engineer