Defining Tools

Tools give an LLM the ability to perform actions beyond text generation. In AI SDK, you define a tool with the tool() function, which takes a description, an inputSchema (Zod or JSON Schema), and an async execute function.

The LLM never runs your code directly. It generates a structured tool call with arguments matching your schema. AI SDK validates the arguments, runs your execute function, and feeds the result back to the LLM so it can formulate a response.
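This round trip can be sketched in plain TypeScript. The names below (runToolCall, the mock tools map) are hypothetical and stand in for the SDK's internals, which also validate the arguments against inputSchema before executing:

```typescript
// Conceptual sketch of the tool-call round trip (hand-rolled, not the SDK's internals).
type ToolCall = { toolName: string; args: Record<string, unknown> };

// Minimal stand-in for the weather tool's execute function.
const tools = {
  weather: {
    execute: async (args: { location: string; unit?: string }) => ({
      location: args.location,
      temperature: 22,
      unit: args.unit ?? 'celsius',
    }),
  },
};

async function runToolCall(call: ToolCall) {
  const tool = tools[call.toolName as keyof typeof tools];
  if (!tool) throw new Error(`Unknown tool: ${call.toolName}`);
  // In AI SDK the args are first validated against inputSchema; here we trust them.
  return tool.execute(call.args as { location: string; unit?: string });
}

// Pretend the LLM generated this structured call.
runToolCall({ toolName: 'weather', args: { location: 'Berlin' } }).then(result => {
  // The result is serialized and sent back to the LLM as a tool-result message.
  console.log(JSON.stringify(result)); // {"location":"Berlin","temperature":22,"unit":"celsius"}
});
```

The key point the sketch makes concrete: the model only produces the structured call; your code (or the SDK on your behalf) owns validation and execution.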

AI SDK supports three tool categories: Custom Tools (you define schema + execute), Provider-Defined Tools (schema + execute from a provider like OpenAI), and Provider-Executed Tools (execution happens on the provider side, e.g., web search). Most of the time you work with custom tools.

import { tool } from 'ai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({
    location: z.string().describe('The city name'),
    unit: z
      .enum(['celsius', 'fahrenheit'])
      .optional()
      .describe('Temperature unit'),
  }),
  execute: async ({ location, unit }) => ({
    location,
    temperature: 22,
    unit: unit ?? 'celsius',
  }),
});

Pass tools as a named object to generateText or streamText. The object keys become the tool names visible to the LLM:

import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await generateText({
  model: anthropic('claude-sonnet-4-5-20250514'),
  tools: { weather: weatherTool },
  prompt: 'What is the weather in Berlin?',
});
The tool() function accepts the following properties:

Property    | Type                    | Required     | Description
description | string                  | Yes          | Tells the LLM when and why to use this tool
inputSchema | ZodSchema or JSONSchema | Yes          | Validates and types the tool arguments
execute     | (args) => Promise<T>    | Yes (custom) | Async function that performs the action
The toolChoice option controls whether and how the model uses tools:

Value                                  | Behavior
'auto'                                 | LLM decides whether to use a tool (default)
'required'                             | LLM must call at least one tool
'none'                                 | Tools disabled, text-only response
{ type: 'tool', toolName: 'weather' }  | Forces a specific tool
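
As a sketch, forcing the weather tool from the earlier example looks like this. It is a fragment of a generateText call and needs a configured provider and API key to actually run; toolChoice is passed alongside tools, and 'weather' must match a key in the tools object:

```typescript
const result = await generateText({
  model: anthropic('claude-sonnet-4-5-20250514'),
  tools: { weather: weatherTool },
  // Force the model to call the weather tool on this turn:
  toolChoice: { type: 'tool', toolName: 'weather' },
  prompt: 'What is the weather in Berlin?',
});
```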
  • Use .describe() on every Zod field — it helps the LLM understand what each parameter means.
  • Keep the description precise; vague descriptions lead to incorrect tool selection.
  • execute is async — you can make real API calls, query databases, or run any server-side code.

Part of AI Learning — free courses from prompt to production. Jan on LinkedIn