Overview

This course walks learners through the foundations of the LLM prediction game: how the game works, how prompts are structured, how they are generated for each day, and how LLM responses are retrieved and tokenized. By the end of this course, you'll have a self-contained daily dataset powering the game logic.

Syllabus
- Unit 1: Explaining the LLM Prediction Game Idea
- Unit 2: Generating Prompt Data for the LLM Prediction Game
  - Expanding the Prompt Data Variety
  - Creating Complete Question Sets
  - Generating a Year of Daily Prompts
  - Saving Game Data for Future Use
- Unit 3: Selecting the Daily Prompt for the LLM Prediction Game
  - Loading Prompts for Daily Challenges
  - Getting the Day of Year
  - Validating Prompt Data for Daily Challenges
  - Selecting the Perfect Daily Prompt
  - Testing Your Daily Prompt System
- Unit 4: Getting and Processing LLM Completions for the Game
  - Setting Up the OpenAI Client
  - Getting Raw Text from LLM
  - Adding Error Handling to LLM Calls
  - Splitting Text into Game Tokens
  - Processing LLM Responses into Words
  - Testing Your LLM Integration
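To give a flavor of the core mechanics covered in Units 3 and 4, here is a minimal sketch of deterministic day-of-year prompt selection and simple word tokenization. The names `get_day_of_year`, `select_daily_prompt`, and `split_into_tokens` are illustrative assumptions, not the course's actual identifiers.

```python
import datetime

# Stand-in for the year of prompts generated in Unit 2.
PROMPTS = ["Prompt A", "Prompt B", "Prompt C"]


def get_day_of_year(date: datetime.date) -> int:
    """Return the 1-based day of the year (1-366)."""
    return date.timetuple().tm_yday


def select_daily_prompt(prompts: list[str], date: datetime.date) -> str:
    """Pick a deterministic prompt for the given date.

    Wrapping with modulo keeps the lookup valid even if the
    prompt list is shorter than a full year.
    """
    if not prompts:
        raise ValueError("prompt list is empty")
    return prompts[(get_day_of_year(date) - 1) % len(prompts)]


def split_into_tokens(text: str) -> list[str]:
    """Split an LLM response into simple whitespace-delimited game tokens."""
    return text.strip().split()


# Example usage:
today_prompt = select_daily_prompt(PROMPTS, datetime.date(2024, 1, 2))
tokens = split_into_tokens("  The quick brown fox  ")
```

Real game code would load the prompt list from the saved dataset and call the OpenAI client for completions; this sketch only shows the selection and tokenization shape.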