
Start and Finish Parts

When streamText runs, each generation step emits a step-start part at the beginning and a step-finish part at the end. A “step” is one LLM call — in a multi-step scenario with tool calls, a single streamText invocation can produce multiple steps (controlled by maxSteps).

The step-finish part carries finishReason (why the step ended), usage (tokens for this step), and isContinued (whether the next step continues the same text). This information is critical for monitoring, cost tracking, and understanding multi-step behavior.

Two callbacks provide the same data in a more convenient form: onStepFinish fires after each individual step, and onFinish fires once after all steps are complete with aggregated totals.

The step-start part is emitted when a generation step begins. In fullStream:

for await (const part of result.fullStream) {
  if (part.type === 'step-start') {
    console.log('New generation step started');
  }
}

The step-finish part is emitted when a generation step completes:

for await (const part of result.fullStream) {
  if (part.type === 'step-finish') {
    console.log('Reason:', part.finishReason);
    console.log('Tokens:', part.usage.totalTokens);
    console.log('Continued:', part.isContinued);
  }
}
Possible finishReason values:

Value               Meaning
'stop'              LLM finished naturally
'length'            Token limit reached (response may be truncated)
'content-filter'    Content was filtered by the provider
'tool-calls'        LLM wants to call tools (step continues)
'error'             An error occurred
'other'             Provider-specific reason
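The step-finish parts can be aggregated into run-level totals. A minimal sketch, using a mock array of parts in place of a real fullStream (the part shapes below are a simplified subset of the SDK's types):

```typescript
// Simplified shapes of the two step-boundary parts used below.
type StreamPart =
  | { type: 'step-start' }
  | {
      type: 'step-finish';
      finishReason: string;
      usage: { totalTokens: number };
      isContinued: boolean;
    };

// Count steps and sum token usage from step-finish parts.
function summarizeSteps(parts: StreamPart[]): { steps: number; totalTokens: number } {
  let steps = 0;
  let totalTokens = 0;
  for (const part of parts) {
    if (part.type === 'step-finish') {
      steps += 1;
      totalTokens += part.usage.totalTokens;
    }
  }
  return { steps, totalTokens };
}

// Simulated two-step run: a tool-call step followed by the final answer.
const parts: StreamPart[] = [
  { type: 'step-start' },
  { type: 'step-finish', finishReason: 'tool-calls', usage: { totalTokens: 120 }, isContinued: false },
  { type: 'step-start' },
  { type: 'step-finish', finishReason: 'stop', usage: { totalTokens: 80 }, isContinued: false },
];

console.log(summarizeSteps(parts)); // { steps: 2, totalTokens: 200 }
```

In a real run you would feed the same logic from `for await (const part of result.fullStream)` instead of a mock array.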
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

// weatherTool is assumed to be defined elsewhere (e.g. with the `tool` helper).
const result = streamText({
  model: anthropic('claude-sonnet-4-5-20250514'),
  prompt: 'What is the weather in Berlin and Tokyo?',
  tools: { weather: weatherTool },
  maxSteps: 5,
  // Called after EACH step
  onStepFinish({ stepType, finishReason, usage }) {
    console.log(`Step (${stepType}): ${finishReason}`);
    console.log(`  Tokens: ${usage.totalTokens}`);
  },
  // Called once after ALL steps
  onFinish({ text, totalUsage, steps, finishReason }) {
    console.log(`Done. Reason: ${finishReason}`);
    console.log(`Total tokens: ${totalUsage.totalTokens}`);
    console.log(`Steps: ${steps.length}`);
  },
});
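Per-step usage is what makes the cost tracking mentioned above possible. A sketch of the idea, with made-up per-token prices (illustrative only, not real provider pricing):

```typescript
// Hypothetical prices in USD per token -- placeholders, not real rates.
const PRICE = { promptPerToken: 3e-6, completionPerToken: 15e-6 };

// Estimate the cost of one step from its usage, as reported by onStepFinish.
function stepCost(usage: { promptTokens: number; completionTokens: number }): number {
  return (
    usage.promptTokens * PRICE.promptPerToken +
    usage.completionTokens * PRICE.completionPerToken
  );
}

// Example: a step with 1000 prompt tokens and 200 completion tokens.
console.log(stepCost({ promptTokens: 1000, completionTokens: 200 }).toFixed(4));
```

Summing `stepCost` inside `onStepFinish` gives a running cost for the whole multi-step run.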
onStepFinish receives:

Property       Type                                              Description
stepType       'initial' | 'continue' | 'tool-result'            What triggered this step
finishReason   FinishReason                                      Why the step ended
usage          { promptTokens, completionTokens, totalTokens }   Token usage for this step
text           string                                            Text generated in this step
toolCalls      ToolCall[]                                        Tool calls made in this step
toolResults    ToolResult[]                                      Tool results from this step
onFinish receives:

Property       Type           Description
text           string         Complete generated text across all steps
finishReason   FinishReason   Final finish reason
usage          TokenUsage     Token usage of the last step
totalUsage     TokenUsage     Aggregated tokens across all steps
steps          Step[]         Array of all step results
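The distinction between usage and totalUsage is easy to miss. A small sketch with mock step data (not real SDK output): usage mirrors the last step only, while totalUsage sums every step.

```typescript
type Usage = { promptTokens: number; completionTokens: number; totalTokens: number };

// Mock per-step usage: a tool-call step followed by the final answer.
const stepUsages: Usage[] = [
  { promptTokens: 100, completionTokens: 20, totalTokens: 120 },
  { promptTokens: 150, completionTokens: 50, totalTokens: 200 },
];

// usage corresponds to the last step only...
const usage = stepUsages[stepUsages.length - 1];

// ...while totalUsage aggregates across all steps.
const totalUsage = stepUsages.reduce(
  (acc, u) => ({
    promptTokens: acc.promptTokens + u.promptTokens,
    completionTokens: acc.completionTokens + u.completionTokens,
    totalTokens: acc.totalTokens + u.totalTokens,
  }),
  { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
);

console.log(usage.totalTokens);      // 200
console.log(totalUsage.totalTokens); // 320
```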

Part of AI Learning — free courses from prompt to production. Jan on LinkedIn