The Context Squeeze
Difficulty: MEDIUM
ID: ai-context-overflow-001
The Scenario
Your chatbot has a 1000-token context limit. Users keep pasting 50-page PDFs, crashing the API with context_length_exceeded.
The Problem
You are appending every message to the history list:
history.append(user_input)  # 💥 grows without bound
llm.chat(history)           # eventually raises context_length_exceeded
After a few exchanges with long documents, you exceed the limit and crash.
The Goal
Implement a Sliding Window strategy (sketched after the requirements below):
- Always keep the System Prompt (first message)
- Always keep the Latest User Message
- Truncate the middle history to fit within token limits
Requirements:
- Target token limit: 1000 tokens
- Must preserve system prompt
- Must include latest user message
- Truncate old messages from history
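Below is a minimal sketch of this strategy, assuming history is a plain list of strings as in the snippet above. count_tokens is a hypothetical helper that estimates roughly 4 characters per token; for accurate counts, substitute your model's real tokenizer (e.g. tiktoken for OpenAI models).

TOKEN_LIMIT = 1000

def count_tokens(text: str) -> int:
    # Hypothetical estimate: roughly 4 characters per token.
    # Swap in a real tokenizer count for production use.
    return len(text) // 4 + 1

def sliding_window(history: list[str], limit: int = TOKEN_LIMIT) -> list[str]:
    # Assumes history holds at least the system prompt (first entry)
    # and the latest user message (last entry).
    system, *middle, latest = history
    budget = limit - count_tokens(system) - count_tokens(latest)

    kept = []
    # Walk the middle newest-first so the most recent context survives.
    for message in reversed(middle):
        cost = count_tokens(message)
        if cost > budget:
            break
        kept.append(message)
        budget -= cost

    # Restore chronological order: system, surviving middle, latest.
    return [system, *reversed(kept), latest]

Call it right before each model call; the full history stays intact while only the trimmed copy is sent:

history.append(user_input)
llm.chat(sliding_window(history))  # fits the budget whenever system + latest do

If your API expects role/content message dicts, the same logic applies; just count tokens on each message's content field.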