Challenge 5.2: Basic Prompting
What is the difference between “Summarise this” and a prompt with a clear role, context, and format specification?
OVERVIEW

Left: a vague prompt leads to unpredictable results. Right: a structured prompt with XML tags delivers consistent, reproducible outputs.
Without structure: A different quality every time. The LLM does not know which format you want, how long the answer should be, or what role it has. Sometimes you get a list of 10 items, sometimes a paragraph, sometimes a question back. No control, no reproducibility.
With XML tags: The prompt is divided into clearly separated sections. The LLM knows exactly what its task is, which rules apply, and what the output should look like. The results are reproducible, measurable, and iterable.
WALKTHROUGH

Layer 1: The XML Tag Structure

A prompt template based on Anthropic best practices uses XML tags to clearly separate different prompt sections. Each tag has a specific purpose:
- `<task-context>`: Who is the LLM? What is its task? Placed at the START, where it has high influence on overall behaviour.
- `<background-data>`: Documents, context, background information. Placed in the MIDDLE; this is where the data goes.
- `<rules>`: Detailed instructions and constraints. Placed in the MIDDLE.
- `<the-ask>`: The actual question or task. Placed at the END, with high influence on concrete execution.
- `<output-format>`: How should the answer be formatted? Placed at the END; it controls the output.

Why this order? LLMs weigh the beginning and end of a prompt more heavily than the middle. Critical instructions therefore belong at the edges.
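This ordering lends itself to a small helper. The following is a minimal sketch (the `assemblePrompt` name and `PromptSections` type are illustrative assumptions, not part of the challenge code) that assembles the five sections in the recommended order and skips any that are missing:

```typescript
// Illustrative sketch: render prompt sections in the recommended order.
type PromptSections = {
  taskContext?: string;
  backgroundData?: string;
  rules?: string;
  theAsk?: string;
  outputFormat?: string;
};

function assemblePrompt(sections: PromptSections): string {
  // Order matters: START and END carry the most weight.
  const parts: Array<[string, string | undefined]> = [
    ['task-context', sections.taskContext],       // START: overall behaviour
    ['background-data', sections.backgroundData], // MIDDLE: the data
    ['rules', sections.rules],                    // MIDDLE: constraints
    ['the-ask', sections.theAsk],                 // END: concrete execution
    ['output-format', sections.outputFormat],     // END: controls the output
  ];
  return parts
    .filter(([, body]) => body !== undefined)
    .map(([tag, body]) => `<${tag}>\n${body}\n</${tag}>`)
    .join('\n\n');
}
```

Sections you leave out are simply omitted, so the same helper works for prompts that have no background data or no explicit rules.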
Layer 2: Each Tag Explained with an Example

Let us take a concrete scenario: you want to build a Chat Title Generator. Given a conversation, the LLM should produce a short title.
`<task-context>` — Defines the role and scope:

```xml
<task-context>
You are a helpful assistant that generates titles for conversations.
</task-context>
```

`<rules>` — Defines concrete constraints:

```xml
<rules>
Find the most concise title that captures the essence of the conversation.
Titles should be at most 30 characters.
Titles should be formatted in title case.
Do not provide a period at the end.
</rules>
```

`<the-ask>` — The actual task:

```xml
<the-ask>
Generate a title for the conversation.
</the-ask>
```

`<output-format>` — Controls the output:

```xml
<output-format>
Return only the title. No additional text, no explanation, no quotes.
</output-format>
```

Layer 3: Turning a Bad Prompt into a Good One
Before — vague prompt without structure:
```ts
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const INPUT = `Do some research on induction hobs and how I can replace
a 100cm wide AGA cooker with an induction range cooker.
Which is the cheapest, which is the best?`;

// Bad: the LLM does not know what kind of title, how long, or in which format
const result = await streamText({
  model: anthropic('claude-sonnet-4-5-20250514'),
  prompt: `Generate me a title: ${INPUT}`,
});
```

After — structured prompt with XML tags:
```ts
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const INPUT = `Do some research on induction hobs and how I can replace
a 100cm wide AGA cooker with an induction range cooker.
Which is the cheapest, which is the best?`;

const result = await streamText({
  model: anthropic('claude-sonnet-4-5-20250514'),
  prompt: `<task-context>
You are a helpful assistant that generates titles for conversations.
</task-context>

<conversation-history>
${INPUT}
</conversation-history>

<rules>
Find the most concise title that captures the essence of the conversation.
Titles should be at most 30 characters.
Titles should be formatted in title case.
Do not provide a period at the end.
</rules>

<the-ask>
Generate a title for the conversation.
</the-ask>

<output-format>
Return only the title. No additional text, no explanation, no quotes.
</output-format>
`.trim(),
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

The difference: the structured prompt has clear sections. The LLM knows it is a title generator (`task-context`), how the title should look (`rules`), what to do (`the-ask`), and what to return (`output-format`). The result is reproducible: the same prompt consistently produces short titles in title case.
Note: In this example, the entire prompt (including `<task-context>`) is in the `prompt` parameter. In production, you would put `<task-context>` and `<rules>` in the `system` parameter and only `<background-data>`, `<the-ask>`, and user input in the `prompt` parameter. Challenge 5.1 shows this pattern with `buildSystemPrompt()`.
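That production split can be sketched roughly as follows. This is a hedged sketch, not code from Challenge 5.1: the `SYSTEM_PROMPT` constant and `buildUserPrompt` function are assumed names.

```typescript
// Static instructions live in the system prompt...
const SYSTEM_PROMPT = `<task-context>
You are a helpful assistant that generates titles for conversations.
</task-context>

<rules>
Find the most concise title that captures the essence of the conversation.
Titles should be at most 30 characters.
</rules>`;

// ...while per-request data and the ask go into the user prompt.
function buildUserPrompt(input: string): string {
  return `<conversation-history>
${input}
</conversation-history>

<the-ask>
Generate a title for the conversation.
</the-ask>`;
}

// At the call site (requires an Anthropic API key):
// const result = await streamText({
//   model: anthropic('claude-sonnet-4-5-20250514'),
//   system: SYSTEM_PROMPT,
//   prompt: buildUserPrompt(conversation),
// });
```

Keeping the static parts in `system` means they can be cached and versioned separately from the per-request input.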
Task: Take the bad prompt below and restructure it with XML tags. Goal: The LLM should summarise an article in exactly 3 sentences.
Create `challenge-5-2.ts` and run it with: `npx tsx challenge-5-2.ts`
```ts
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const ARTICLE = `TypeScript 5.8 introduces several improvements to type
inference and control flow analysis. The new release includes support for
granular checks on branches in return expressions, allowing the compiler
to better narrow types. Additionally, the release adds support for
requiring return type annotations via the new --isolatedDeclarations
flag, making it easier to generate declaration files from source code
without type-checking.`;

// TODO: Structure this prompt with XML tags
// The bad prompt:
const result = await generateText({
  model: anthropic('claude-sonnet-4-5-20250514'),
  prompt: `Fasse den Text zusammen: ${ARTICLE}`,
});

// TODO: Replace the prompt above with a structured prompt containing:
// 1. <task-context> — define the role (e.g. technical editor)
// 2. <rules> — define constraints (exactly 3 sentences, in German, no opinions)
// 3. <the-ask> — what exactly should the LLM do?
// 4. <output-format> — only the summary, no additional text

console.log(result.text);
```

Checklist:
- Prompt uses at least 3 XML tags
- `<task-context>` defines the role
- `<rules>` defines constraints
- `<output-format>` defines the expected format
Show solution
```ts
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const ARTICLE = `TypeScript 5.8 introduces several improvements to type
inference and control flow analysis. The new release includes support for
granular checks on branches in return expressions, allowing the compiler
to better narrow types. Additionally, the release adds support for
requiring return type annotations via the new --isolatedDeclarations
flag, making it easier to generate declaration files from source code
without type-checking.`;

const result = await generateText({
  model: anthropic('claude-sonnet-4-5-20250514'),
  prompt: `<task-context>
Du bist ein technischer Redakteur, der Artikel fuer Entwickler zusammenfasst.
</task-context>

<background-data>
${ARTICLE}
</background-data>

<rules>
- Fasse den Artikel in genau 3 Saetzen zusammen
- Schreibe auf Deutsch
- Bleibe sachlich, keine Wertung
- Verwende die korrekten Fachbegriffe aus dem Original
</rules>

<the-ask>
Fasse den obenstehenden Artikel zusammen.
</the-ask>

<output-format>
Gib nur die Zusammenfassung zurueck. Keine Ueberschrift, keine Einleitung,
keine zusaetzliche Erklaerung.
</output-format>
`.trim(),
});

console.log(result.text);
```

Explanation: the prompt defines a clear role (technical editor), provides the article as `<background-data>`, sets concrete rules (3 sentences, German, factual), and controls the output format. The LLM knows exactly what is expected.
Expected output (approximate):

```
TypeScript 5.8 verbessert die Typinferenz und Kontrollflussanalyse. Die neue Version unterstuetzt granulare Pruefungen bei Return-Ausdruecken. Zusaetzlich wird das --isolatedDeclarations-Flag eingefuehrt.
```

COMBINE
Exercise: combine the structured prompt with the template from Challenge 5.1. Build a `buildPrompt` function that unifies both:

- Use `buildSystemPrompt()` from Challenge 5.1 for the `system` parameter
- Build a `buildUserPrompt(input: string): string` function that wraps the user input in XML tags (`<background-data>`, `<the-ask>`, `<output-format>`)
- Call `generateText` with both together
Optional Stretch Goal: make the `<rules>` configurable by adding them as a parameter to the `buildPrompt` function.
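One possible shape for the stretch goal, as a hedged sketch (the `buildPrompt` signature below is an assumption, not the official solution): the rules arrive as a string array and are rendered as a list inside `<rules>`.

```typescript
// Sketch: a user-prompt builder with configurable rules.
function buildPrompt(input: string, rules: string[]): string {
  // Render each rule as a list item inside the <rules> tag.
  const ruleLines = rules.map((r) => `- ${r}`).join('\n');
  return `<background-data>
${input}
</background-data>

<rules>
${ruleLines}
</rules>

<the-ask>
Summarise the article above.
</the-ask>

<output-format>
Return only the summary. No heading, no introduction, no extra explanation.
</output-format>`;
}
```

Because the rules are now data rather than hard-coded text, the same builder serves different tasks: pass summary rules for this challenge, title rules for the generator from the walkthrough.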