Overview

This course walks learners through the foundations of the LLM prediction game: how the game works, how prompts are structured, how they are generated for each day of the year, and how LLM responses are fetched and tokenized into words. By the end of this course, you’ll have a self-contained daily dataset powering the game logic.

Syllabus
- Unit 1: LLM Prediction Game Overview
- Unit 2: Generating Prompt Data
  - Expand Breakpoints and Noun Phrases in Prompt Generator
  - Create Question Array with map
  - Generate 365 Daily Prompts Array
  - Save Generated Prompts to data.json
- Unit 3: Selecting the Daily Prompt
  - Implement loadAllPrompts Function for Prediction Game
  - Implement the _getDayOfYear Function
  - Validate Prompt Count in getDailyPrompt
  - Implement Daily Prompt Selection Logic
  - CLI Test Script for Prompt Selection System
- Unit 4: LLM Completion in JavaScript
  - Initialize OpenAI Client in llm.js
  - Implement getLlmResponseWords Function
  - Add Error Handling to getLlmResponseWords Function
  - Implement the _splitIntoWords Word Tokenizer
  - Modify getLlmResponseWords to Return an Array of Words
  - Standalone LLM Response Test Script
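To preview the kind of logic Unit 3 builds toward, here is a minimal sketch of daily prompt selection: compute the day of the year and map it onto an index into the prompts array. The function names and shapes here are illustrative assumptions, not the course's exact implementation.

```javascript
// Hypothetical sketch of daily prompt selection (assumes prompts were
// loaded from data.json into a plain array of strings).
function getDayOfYear(date) {
  // Day 1 is January 1st of the date's own year.
  const startOfYear = new Date(date.getFullYear(), 0, 1);
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.floor((date - startOfYear) / msPerDay) + 1;
}

function getDailyPrompt(prompts, date = new Date()) {
  // Validate the prompt count before indexing, as Unit 3 suggests.
  if (!Array.isArray(prompts) || prompts.length === 0) {
    throw new Error("No prompts loaded");
  }
  // Wrap the day of year onto a valid array index.
  const index = (getDayOfYear(date) - 1) % prompts.length;
  return prompts[index];
}
```

With a full 365-entry array, each day of the year maps to its own prompt; the modulo simply keeps the index valid if the array is shorter (or on day 366 of a leap year).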