CPU Inferencing of Language Models in Teradata
MLOps World: Machine Learning in Production via YouTube
Overview
Learn how to implement and run pre-trained Hugging Face language models directly within Teradata's Vantage platform using CPU-only inference in this comprehensive workshop. Discover Teradata's massively parallel processing capabilities and how the platform's shared-nothing, linearly scalable architecture enables GPU-like inference performance through parallel CPU processing.

Explore the installation process for embedding and sequence-to-sequence language models within the database environment, and understand how to integrate these AI capabilities into existing ETL workflows and business intelligence dashboards with built-in workload management. Gain insights into practical applications and novel use cases for in-database language model processing, including integration strategies for customer experience applications. Master the techniques for leveraging Teradata's SQL and Python/R interfaces to access and utilize language models within enterprise-scale data processing environments, enabling seamless AI integration without requiring separate GPU infrastructure.
Syllabus
CPU Inferencing of Language Models in Teradata
Taught by
MLOps World: Machine Learning in Production