The Memory Layer

Difficulty: HARD
ID: ai-memory-layer

The Scenario

Your agent runs on a high-end model (Gemini 3.0) with a 1M+ token context window. Context overflow is no longer the problem—cost and reasoning quality are.

Every request costs $0.05 because you're sending 50,000 tokens of "filler" chat history, and the accumulated noise is starting to make the model hallucinate.

The Goal

Implement a Tiered Memory System that prioritizes Information Density:

  1. Extract Facts: Scan old messages for key entities (e.g., User's Name, Hobbies) and move them to a 'Fact Store' (see the extraction sketch after this list).
  2. KV Cache Alignment: Structure your prompt so the "System" and "Fact Store" blocks form a stable prefix that the provider's prompt caching can reuse across requests.
  3. Recent Window: Keep only the last 10 messages as raw text.
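
Below is a minimal extraction sketch. It assumes each history message is a dict with "role" and "content" keys, and it stands in for a real NER or LLM extraction pass with two hypothetical regex patterns; the keys user_name and hobby are illustrative, not part of the spec.

```python
import re

def extract_facts(messages, fact_store):
    """Promote key entities from raw messages into the fact store.

    Sketch only: the regex patterns below are hypothetical stand-ins
    for a proper entity-extraction pass.
    """
    patterns = {
        "user_name": re.compile(r"[Mm]y name is (\w+)"),
        "hobby": re.compile(r"[Ii] (?:love|enjoy) (\w+)"),
    }
    for message in messages:
        for key, pattern in patterns.items():
            match = pattern.search(message.get("content", ""))
            if match:
                fact_store[key] = match.group(1)
    return fact_store
```

Run over a message like {"role": "user", "content": "My name is Alice and I love sailing"}, this leaves fact_store as {"user_name": "Alice", "hobby": "sailing"}, so both facts survive no matter how far the raw message scrolls out of the window.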

Requirements:

  • Implement build_smart_context(system_prompt, history, user_input, fact_store) (a sketch follows these requirements).
  • Store facts in the fact_store dict (passed as parameter).
  • Build context in order: System Prompt → Facts → Recent History → User Input.
  • Ensure "Alice" and "sailing" are retrievable 100+ messages later.