Overview
Syllabus
- Introduction to using multiple APIs through a single endpoint
- Explanation of how different providers' APIs use different request formats
- Introduction to LiteLLM as an open-source solution
- Overview of testing different use cases
- Basic setup instructions for LiteLLM
- Demo of basic API calls
- Discussion of LiteLLM's business model
- Testing long context with a Berkshire transcript
- Demonstration of streaming functionality
- Testing tool calling capabilities
- Testing o1 series models
- Testing image input functionality
- Introduction to custom endpoints
- Setting up RunPod custom endpoint
- Demonstration of local endpoint with LM Studio
- Conclusion and recommendations
- Comparison with alternatives such as LangChain and DSPy
- Reference to Advanced Inference Repo
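For the custom-endpoint topics above, routing several providers through one endpoint is typically done with a LiteLLM proxy config. A minimal sketch follows; the model aliases and the RunPod URL are hypothetical placeholders, not values from the course:

```yaml
model_list:
  - model_name: gpt-4o-mini            # alias clients request
    litellm_params:
      model: openai/gpt-4o-mini        # provider/model it routes to
  - model_name: runpod-llama           # hypothetical custom-endpoint alias
    litellm_params:
      model: openai/my-served-model    # placeholder served-model name
      api_base: https://example-pod-id.runpod.net/v1   # placeholder URL
```

A proxy started with this file (e.g. `litellm --config config.yaml`) exposes both models behind one OpenAI-compatible endpoint.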
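The unifying idea behind the syllabus is that LiteLLM lets every provider be addressed with the same OpenAI-style chat-completions request shape. A minimal sketch of that request body follows; the model name and message content are illustrative placeholders, not values from the course:

```python
import json

# OpenAI-style chat-completions payload, the common format LiteLLM
# accepts regardless of which provider ultimately serves the request.
payload = {
    "model": "gpt-4o-mini",  # placeholder: any model registered with LiteLLM
    "messages": [
        {"role": "user", "content": "Summarise this transcript."}
    ],
    "stream": False,  # set True to receive the response token by token
}

# Serialise to the JSON body you would POST to a LiteLLM endpoint.
body = json.dumps(payload)
print(body)
```

The same payload works whether it is sent to OpenAI directly, a LiteLLM proxy, or a local server exposing an OpenAI-compatible route.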
Taught by
Trelis Research