Multi-Modal Transformer Agents Controlled by StarCoder - Building AI Systems Without LangChain
Discover AI via YouTube
Overview
Learn to implement and control multi-modal transformer agents in this 21-minute tutorial, which demonstrates how to connect transformer applications on the HuggingFace Hub with StarCoder acting as the central intelligence. Explore how different modalities (audio, visual, and written content) are integrated, with StarCoder, OpenAssistant, or OpenAI's text-davinci-003 serving as the controlling LLM that routes tasks between models. Follow along with real-time coding in a Colab notebook to see how to instantiate transformer agents, set up prompt templates, and create ordered execution flows. Learn to extend the multimodal toolbox with Gradio tools, and see how a transformer agent selects and links different HuggingFace transformers according to the task at hand. This approach, which pairs individual transformers with a controlling AI/LLM and works without LangChain, represents a significant step in transformer technology implementation.
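The core pattern the tutorial walks through, a controlling LLM that picks and chains individual transformer tools per task, can be sketched without any real models. The sketch below is illustrative only: the `pick_tool` keyword matcher stands in for the code StarCoder would generate, and the tool functions stand in for real HuggingFace pipelines. All names here are hypothetical and are not the HuggingFace Transformers Agents API.

```python
# Illustrative sketch: a "switching intelligence" selects which
# transformer tool to run for a given natural-language task.
# All names are hypothetical stand-ins, not a real library API.

from typing import Callable, Dict

# Stand-ins for individual HuggingFace transformer tools (one per modality).
def image_captioner(payload: str) -> str:
    return f"caption({payload})"

def text_to_speech(payload: str) -> str:
    return f"audio({payload})"

def summarizer(payload: str) -> str:
    return f"summary({payload})"

TOOLS: Dict[str, Callable[[str], str]] = {
    "caption": image_captioner,
    "speak": text_to_speech,
    "summarize": summarizer,
}

def pick_tool(task: str) -> str:
    """Stand-in for the controlling LLM: map a task description to a tool name."""
    for name in TOOLS:
        if name in task.lower():
            return name
    raise ValueError(f"no tool found for task: {task!r}")

def run_agent(task: str, payload: str) -> str:
    """Ordered execution flow: select a tool, then execute it on the payload."""
    tool = TOOLS[pick_tool(task)]
    return tool(payload)

print(run_agent("Please caption this image", "cat.png"))  # caption(cat.png)
print(run_agent("Summarize the article", "news.txt"))     # summary(news.txt)
```

In the video itself, the selection step is performed by a real code-generating LLM (StarCoder served from the HuggingFace Hub) rather than keyword matching, and the tools are actual hosted transformer models; the routing structure, however, is the same.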
Syllabus
Multi-Modal Transformer AGENTS, controlled by StarCoder (W/o LangChain)
Taught by
Discover AI