Overview

This course walks learners through the foundations of the LLM prediction game: how the game works, how prompts are structured, how a prompt is generated for each day of the year, and how LLM responses are fetched and split into words. By the end of the course, you’ll have a self-contained daily dataset powering the game logic.

Syllabus
- Unit 1: Explaining the LLM Prediction Game Idea
- Unit 2: Generating Prompt Data for the LLM Prediction Game
  - Expand Breakpoints and Nouns in Prompt Generator
  - Generate Questions for Each Noun Using List Comprehension
  - 365-Day Prompt Generator
  - Save Entries to data.json
- Unit 3: Selecting the Daily Prompt for the LLM Prediction Game
  - Implement load_all_prompts Function
  - Fix the Daily Prompt Selection
  - Validate Prompts Count in get_daily_prompt
  - Complete get_daily_prompt Function
  - Test Script for Prompt Selection System
- Unit 4: Getting and Processing LLM Completions for the Game
  - Initialize the OpenAI Client
  - Implement get_llm_response_words Function
  - Add Error Handling to LLM Calls
  - Implement _split_into_words Function
  - Refactor get_llm_response_words to Output Word List
  - Standalone LLM Response Testing Script
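To preview Unit 2, here is a minimal sketch of a 365-day prompt generator that saves its entries to `data.json`. The specific nouns, templates, and entry fields are assumptions for illustration; the course's actual breakpoints and data shape may differ.

```python
import json
from itertools import cycle, islice

# Hypothetical nouns and question templates -- placeholders, not the course's data.
NOUNS = ["river", "robot", "violin"]
TEMPLATES = ["Describe a {noun} in one sentence.", "What is a {noun} used for?"]

def generate_prompts(days=365):
    """Build one prompt entry per day by cycling through noun/template pairs."""
    # List comprehension pairs every noun with every template (as in Unit 2).
    pairs = [(t.format(noun=n), n) for n in NOUNS for t in TEMPLATES]
    entries = []
    for day, (prompt, noun) in enumerate(islice(cycle(pairs), days), start=1):
        entries.append({"day": day, "noun": noun, "prompt": prompt})
    return entries

def save_entries(entries, path="data.json"):
    """Write all prompt entries to a JSON file on disk."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(entries, f, indent=2)

entries = generate_prompts()
save_entries(entries)
```

Cycling a small pair list keeps the generator short while still yielding a distinct entry for each of the 365 days.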
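Unit 3's selection logic can be sketched as follows: load every entry from `data.json`, validate that the list is non-empty, and index it by day of year. The exact signatures of `load_all_prompts` and `get_daily_prompt` here are guesses from the lesson titles, not the course's reference solution.

```python
import json
from datetime import date

def load_all_prompts(path="data.json"):
    """Read the full list of prompt entries from disk."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def get_daily_prompt(prompts, today=None):
    """Pick the entry whose position matches today's day of the year."""
    if not prompts:  # validate the prompt count before indexing
        raise ValueError("No prompts available")
    today = today or date.today()
    day_of_year = today.timetuple().tm_yday  # 1..366
    # Modulo keeps the index valid even in leap years or with short lists.
    return prompts[(day_of_year - 1) % len(prompts)]
```

Using the day of year as an index makes the selection deterministic: every player gets the same prompt on the same date without any shared server state.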
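Finally, a hedged sketch of Unit 4's completion handling: `get_llm_response_words` calls the model, catches API errors, and delegates to `_split_into_words` to return a word list. This assumes the `openai>=1.0` client library and a `gpt-4o-mini` model name; both are illustrative choices, and the tokenization regex is one reasonable option, not the course's exact rule.

```python
import re

def _split_into_words(text: str) -> list[str]:
    """Lowercase the response and extract alphabetic tokens (apostrophes kept)."""
    return re.findall(r"[a-z']+", text.lower())

def get_llm_response_words(prompt: str, model: str = "gpt-4o-mini") -> list[str]:
    """Ask the model for a completion and return it as a list of words."""
    try:
        from openai import OpenAI  # assumes the openai>=1.0 package is installed
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return _split_into_words(resp.choices[0].message.content)
    except Exception as exc:  # broad catch so an API failure doesn't crash the game
        print(f"LLM call failed: {exc}")
        return []
```

Returning an empty list on failure lets the game loop degrade gracefully, which is the spirit of the "Add Error Handling to LLM Calls" lesson.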