
Level 2 Complete

Level 2 complete! You now understand the mechanics behind LLM costs and know how to control them. In the Boss Fight you combined all four building blocks into a Token Budget Calculator — that’s real production knowledge.

  • Tokens: LLMs don't read raw text; they operate on subword units identified by token IDs. Roughly 1 token ≈ 4 characters in English (≈ 3 in German), and different languages and text types consume different amounts of tokens.
  • Usage Tracking: Every generateText call returns result.usage with promptTokens, completionTokens, and totalTokens. Cost calculation: (tokens / 1,000,000) × price per million tokens. Output tokens are typically 3-5× more expensive than input tokens.
  • Context Window: Everything sent to the LLM — System Prompt, messages, tool definitions AND space for output — must fit into the Context Window. When exceeded: error or silent truncation. Strategies: message truncation, token budget, summarization.
  • Prompt Caching: Identical prompt prefixes are read from cache after the first request; with Anthropic, cached reads cost 10% of the normal input price. Track the cache hit rate for cost optimization.
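The cost formula above can be sketched as a small pure function. The `Usage` shape mirrors the `result.usage` fields named in the text; the prices are hypothetical placeholders, since real per-model pricing varies.

```typescript
// Shape of the usage object returned by generateText, per the course text.
type Usage = { promptTokens: number; completionTokens: number; totalTokens: number };

// cost = (tokens / 1,000,000) × price per million tokens,
// applied separately to input and output because output is priced higher.
function estimateCostUSD(
  usage: Usage,
  inputPricePerM: number,   // hypothetical $/1M input tokens
  outputPricePerM: number,  // hypothetical $/1M output tokens
): number {
  const inputCost = (usage.promptTokens / 1_000_000) * inputPricePerM;
  const outputCost = (usage.completionTokens / 1_000_000) * outputPricePerM;
  return inputCost + outputCost;
}

// Example: 1,200 prompt tokens and 300 completion tokens at $3 / $15 per million.
const usage: Usage = { promptTokens: 1_200, completionTokens: 300, totalTokens: 1_500 };
console.log(estimateCostUSD(usage, 3, 15).toFixed(6)); // → 0.008100
```

Note how the 5× output price means the 300 completion tokens here cost more than the 1,200 prompt tokens.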
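Message truncation, one of the context-window strategies listed above, can be sketched like this: estimate tokens with the ≈ 4-characters-per-token heuristic from the text and keep the newest messages that fit a token budget, dropping the oldest first. The `Msg` type and function names are illustrative, not part of any library API.

```typescript
type Msg = { role: "system" | "user" | "assistant"; content: string };

// Rough heuristic from the text: ~4 characters per token for English.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Walk the history from newest to oldest, keeping messages until the
// budget is exhausted, so the most recent context survives truncation.
function truncateToBudget(messages: Msg[], budgetTokens: number): Msg[] {
  const kept: Msg[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > budgetTokens) break;
    kept.unshift(messages[i]); // prepend to preserve chronological order
    used += cost;
  }
  return kept;
}
```

In production you would reserve part of the budget for the system prompt, tool definitions, and the output, as the Context Window bullet above points out.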
[Skill Tree diagram: Level 2 LLM Fundamentals complete; Level 3 Agents & MCP is next]

Level 3: Agents & MCP — from passive text generator to active agent. You'll learn how LLMs call tools, how the tool-call loop works, and how to connect external services via the Model Context Protocol (MCP). Your LLM will then not only respond, but act.

Part of AI Learning — free courses from prompt to production. Jan on LinkedIn