AI Engineer - Learn how to integrate AI into software applications
Overview
Learn to deploy an AI chatbot with real-time inference using Kubernetes and GPUs in this 11-minute tutorial. Follow along as Mike Ellison, Senior Developer Advocate for Akamai, demonstrates building and deploying an LLM-powered chatbot using Akamai Cloud and the Linode Kubernetes Engine (LKE), requiring no prior cloud expertise. Discover how to use App Platform for deploying full-stack AI applications, leverage LKE with NVIDIA GPUs for inference, and integrate with Hugging Face and Llama 3. Explore the process of managing GPUs and configuring serverless workloads with KServe, Hugging Face, and Knative, while simplifying DNS, SSL, and CI/CD with one-click infrastructure solutions. Gain insights from real-world experience showing how to get an AI chatbot running in just over an hour, making advanced AI deployment accessible to developers at any level.
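The serverless inference setup described above (KServe's Hugging Face runtime on GPU nodes, with Knative handling scaling) can be sketched as a minimal KServe InferenceService manifest. This is an illustrative assumption, not the tutorial's actual configuration: the service name, namespace, model ID, and GPU sizing are all hypothetical.

```yaml
# Hypothetical KServe InferenceService for a Llama 3 chatbot backend.
# All names and sizes below are illustrative, not taken from the tutorial.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llama3-chat        # assumed service name
  namespace: ai-demo       # assumed namespace
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface  # KServe's Hugging Face serving runtime
      args:
        - --model_id=meta-llama/Meta-Llama-3-8B-Instruct  # assumed model
      resources:
        limits:
          nvidia.com/gpu: "1"   # schedule onto a GPU node in the LKE pool
```

Applied with `kubectl apply -f`, a manifest like this has KServe stand up the predictor on Knative, which can scale the GPU-backed pods with request load.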
Syllabus
Build a Chatbot with Kubernetes & Akamai Cloud
Taught by
Linode