# From Vercel AI SDK
Side-by-side migration guide. Map your Vercel AI SDK code to AgentsKit — with honest callouts about where each wins.
Vercel AI SDK is an excellent chat SDK. If you're on it and happy, stay. Come to AgentsKit when you hit one of these:
- You need a real agent runtime (ReAct loop, tools+skills+memory+delegation) without writing it yourself
- You want to swap providers in one line, not rewrite route handlers
- You want formal contracts for adapters, tools, memory, retrieval, skills, runtime
- You need terminal, CLI, or headless surfaces — Vercel AI SDK is React-first
This page maps common Vercel AI SDK patterns to their AgentsKit equivalents.
## Quick reference
| Vercel AI SDK | AgentsKit | Notes |
|---|---|---|
| `streamText({ model, messages })` | `adapter.createSource({ messages }).stream()` | Adapter is the seam; see Concepts: Adapter |
| `useChat()` (App Router) | `useChat({ adapter })` from `@agentskit/react` | Same name, different shape. See below. |
| `tool({ description, parameters, execute })` | `ToolDefinition` with `schema` (JSON Schema 7) | Zod → JSON Schema via `zod-to-json-schema` |
| `generateText({ ..., maxSteps })` | `createRuntime({ ..., maxSteps }).run(task)` | AgentsKit's runtime has memory, retrieval, delegation built in |
| `experimental_StreamData` | `observers[]` or `StreamChunk.metadata` | Observers are read-only, composable |
| `openai('gpt-4o')` | `openai({ apiKey, model: 'gpt-4o' })` | AgentsKit adapters take an options object |
| Route handler (`app/api/chat/route.ts`) | Client hook uses adapter directly, or server action | AgentsKit works in RSC, route handlers, edge, Node |
## 1. Basic streaming chat (client hook)
**Before — Vercel AI SDK**

```tsx
// app/chat.tsx
'use client'
import { useChat } from 'ai/react'

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
  })
  return (
    <form onSubmit={handleSubmit}>
      {messages.map(m => (
        <div key={m.id}>{m.role}: {m.content}</div>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  )
}
```

```ts
// app/api/chat/route.ts
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

export async function POST(req: Request) {
  const { messages } = await req.json()
  const result = streamText({ model: openai('gpt-4o'), messages })
  return result.toDataStreamResponse()
}
```

**After — AgentsKit**
```tsx
// app/chat.tsx
'use client'
import { useChat, ChatContainer, Message, InputBar } from '@agentskit/react'
import { openai } from '@agentskit/adapters'
import '@agentskit/react/theme'

const adapter = openai({ apiKey: KEY, model: 'gpt-4o' })

export default function Chat() {
  const chat = useChat({ adapter })
  return (
    <ChatContainer>
      {chat.messages.map(m => <Message key={m.id} message={m} />)}
      <InputBar chat={chat} />
    </ChatContainer>
  )
}
```

**What's different**
- No route handler is required for client-side-only demos — the adapter calls the provider directly. For production, proxy through a server action or route handler so your key isn't in the browser.
- Components are provided (`ChatContainer`, `Message`, `InputBar`) but they're headless; swap them for your own freely.
- The return type of `useChat` is `ChatReturn`, not the Vercel AI SDK's shape. Keys you'll care about: `messages`, `input`, `setInput`, `send`, `stop`, `status`, `retry`.
## 2. Securing the API key (server side)
Vercel AI SDK pushes you toward a route handler. AgentsKit supports route handlers, server actions, or a proxied adapter — your choice.
**Server action variant (recommended)**

```ts
// app/actions.ts
'use server'
import { openai } from '@agentskit/adapters'
import { createRuntime } from '@agentskit/runtime'

const runtime = createRuntime({
  adapter: openai({ apiKey: process.env.OPENAI_API_KEY!, model: 'gpt-4o' }),
})

export async function ask(prompt: string) {
  const result = await runtime.run(prompt)
  return result.content
}
```

```tsx
// app/page.tsx
'use client'
import { ask } from './actions'
// Call ask(prompt) from a form or event handler; stream via RSC Suspense
```

**Route handler variant (same-shape migration)**
If you prefer to keep a `/api/chat` endpoint, proxy through a route handler that yields `StreamChunk` as text chunks. See `apps/example-react` in the repo for a working reference.
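As a sketch of what that proxy can look like: the helper below turns an async iterable of chunks into a plain-text streaming `Response`. This is a hedged sketch, not AgentsKit API — it assumes chunks expose a `delta` text field (check the actual `StreamChunk` type in `@agentskit/core`), and the commented route handler is hypothetical.

```typescript
// Hypothetical proxy helper: forward text deltas from an adapter stream
// as a plain-text streaming Response. The `delta` field name is assumed.
function toTextStreamResponse(
  chunks: AsyncIterable<{ delta?: string }>,
): Response {
  const encoder = new TextEncoder()
  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      try {
        for await (const chunk of chunks) {
          if (chunk.delta) controller.enqueue(encoder.encode(chunk.delta))
        }
        controller.close()
      } catch (err) {
        controller.error(err)
      }
    },
  })
  return new Response(stream, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  })
}

// In app/api/chat/route.ts you would then do something like (hypothetical):
// export async function POST(req: Request) {
//   const { messages } = await req.json()
//   return toTextStreamResponse(adapter.createSource({ messages }).stream())
// }
```

Because the helper speaks only web-standard `ReadableStream` and `Response`, it runs the same on Node and edge runtimes.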
## 3. Tool calling
**Before — Vercel AI SDK**

```ts
import { tool, streamText } from 'ai'
import { openai } from '@ai-sdk/openai'
import { z } from 'zod'

const weatherTool = tool({
  description: 'Get the weather for a city',
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => {
    const res = await fetch(`https://wttr.in/${city}?format=j1`)
    return await res.json()
  },
})

const result = streamText({
  model: openai('gpt-4o'),
  messages,
  tools: { weather: weatherTool },
  maxSteps: 5,
})
```

**After — AgentsKit**
```ts
import type { ToolDefinition } from '@agentskit/core'
import { createRuntime } from '@agentskit/runtime'
import { openai } from '@agentskit/adapters'

const weatherTool: ToolDefinition = {
  name: 'weather',
  description: 'Get the weather for a city',
  schema: {
    type: 'object',
    properties: { city: { type: 'string' } },
    required: ['city'],
  },
  async execute(args) {
    const res = await fetch(`https://wttr.in/${args.city}?format=j1`)
    return await res.json()
  },
}

const runtime = createRuntime({
  adapter: openai({ apiKey: KEY, model: 'gpt-4o' }),
  tools: [weatherTool],
  maxSteps: 5,
})
```

**Differences to notice**
- Tools use JSON Schema 7, not Zod. Convert with `zodToJsonSchema(yourZodSchema)` if you want to keep Zod as the source.
- Tool `name` is explicit (it's an identity — see ADR 0002 T1).
- `maxSteps` is a hard cap, not a suggestion. Pick a generous number; the runtime will not loop past it.
- You can set `requiresConfirmation: true` and wire `onConfirm` on the runtime — Vercel AI SDK doesn't formalize this.
## 4. Multi-provider swap
**Before — Vercel AI SDK**

```ts
// route handler has to know which SDK to import
import { openai } from '@ai-sdk/openai'
import { anthropic } from '@ai-sdk/anthropic'

const model = useClaude ? anthropic('claude-sonnet-4-6') : openai('gpt-4o')
const result = streamText({ model, messages })
```

**After — AgentsKit**
```ts
import { openai, anthropic } from '@agentskit/adapters'

const adapter = useClaude
  ? anthropic({ apiKey, model: 'claude-sonnet-4-6' })
  : openai({ apiKey, model: 'gpt-4o' })

useChat({ adapter }) // or createRuntime({ adapter })
```

Also possible in AgentsKit but not in Vercel AI SDK: router and ensemble adapters. Coming in Phase 3 (#145, #146).
## 5. Agent runtime with memory + retrieval
This is the biggest win. Vercel AI SDK has no runtime — you write the loop yourself. AgentsKit has one, with formal contracts.
**Vercel AI SDK pattern**

```ts
// You write this yourself — ~50 lines, easy to get wrong
async function runAgent(task: string) {
  const messages = [{ role: 'user', content: task }]
  for (let step = 0; step < 10; step++) {
    const result = await generateText({ model, messages, tools })
    // parse tool calls, execute, append results, decide when to stop...
  }
}
```

**AgentsKit**
```ts
import { createRuntime } from '@agentskit/runtime'
import { openai, openaiEmbed } from '@agentskit/adapters'
import { webSearch, filesystem } from '@agentskit/tools'
import { sqliteChatMemory, fileVectorMemory } from '@agentskit/memory'
import { createRAG } from '@agentskit/rag'

const rag = createRAG({
  store: fileVectorMemory({ path: './embeddings.json' }),
  embed: openaiEmbed({ apiKey: KEY, model: 'text-embedding-3-small' }),
})

const runtime = createRuntime({
  adapter: openai({ apiKey: KEY, model: 'gpt-4o' }),
  tools: [webSearch(), ...filesystem({ basePath: './workspace' })],
  memory: sqliteChatMemory({ path: './sessions/user-42.db' }),
  retriever: rag,
  maxSteps: 10,
})

const result = await runtime.run('Research the top 3 AI frameworks and save a summary')
```

Memory load + save, retrieval per turn, tool resolution, error categorization, abort semantics — all handled by the runtime per ADR 0006.
## 6. Multi-agent / delegation
Vercel AI SDK does not ship this. You'd implement it yourself.
```ts
import { planner, researcher, coder } from '@agentskit/skills'

await runtime.run('Build a landing page about quantum computing', {
  skill: planner,
  delegates: {
    researcher: { skill: researcher, tools: [webSearch()], maxSteps: 3 },
    coder: { skill: coder, tools: [...filesystem({ basePath: './src' })], maxSteps: 8 },
  },
})
```

See Recipe: Multi-agent research team for the full walkthrough.
## 7. Observability / telemetry
**Vercel AI SDK**

```ts
const result = streamText({
  model,
  messages,
  onFinish: ({ usage }) => console.log(usage),
  experimental_telemetry: { isEnabled: true },
})
```

**AgentsKit**
```ts
import type { Observer } from '@agentskit/core'

const telemetry: Observer = {
  onModelStart: () => console.time('model'),
  onRunEnd: (result) => {
    console.timeEnd('model')
    console.log(`${result.steps} steps, ${result.toolCalls.length} tool calls`)
  },
  onChunk: (chunk) => {
    if (chunk.metadata?.usage) console.log('usage:', chunk.metadata.usage)
  },
}

createRuntime({ adapter, tools, observers: [telemetry] })
```

Observers are composable (array), read-only, and first-class in the contract (RT9). Plug in LangSmith, OpenTelemetry, PostHog, or your own logger.
## Where Vercel AI SDK still wins
Honest callouts — choose Vercel AI SDK over AgentsKit when:
- You need `generateObject` with strict schema output right now. AgentsKit will ship a structured-output primitive in Phase 2; today you'd handle it manually.
- You're shipping a consumer chat app and want the smallest possible footprint with no agent concepts. Vercel AI SDK is purpose-built for that.
- Your team is deeply invested in the Vercel ecosystem (AI Elements, Vercel AI Gateway, etc.) and doesn't need a runtime.
- You want v1.0 stability right now. Vercel AI SDK is post-1.0; AgentsKit is pre-1.0 (with formal contracts already locked — but that's not the same as v1.0.0).
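Until a structured-output primitive lands, the manual workaround for that gap looks roughly like this: instruct the model to answer with a single JSON object, then extract and parse it from the reply. A framework-agnostic sketch (the commented `runtime.run` usage is hypothetical):

```typescript
// Manual structured output: pull the first-to-last-brace span out of a
// model reply and parse it. Crude but workable for single-object replies.
function extractJson(text: string): unknown {
  const start = text.indexOf('{')
  const end = text.lastIndexOf('}')
  if (start === -1 || end <= start) throw new Error('no JSON object in reply')
  return JSON.parse(text.slice(start, end + 1))
}

// Hypothetical usage with a runtime:
// const result = await runtime.run('Reply with one JSON object: { "title": string, "tags": string[] }')
// const data = extractJson(result.content)
```

In production you'd also validate the parsed value (e.g. with Zod) and retry on parse failure — exactly the boilerplate a structured-output primitive exists to remove.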
If none of those apply and you want the runtime + contracts, migrate.
## Incremental migration
You don't have to migrate everything at once:
- Keep your Vercel AI SDK route handler for existing endpoints
- Add AgentsKit for new features — a CLI, a terminal chat, an autonomous agent
- Port the chat hook when you want `@agentskit/react`'s components
- Consolidate when the runtime becomes the hub — one adapter instance across React, Ink, and the runtime
The Adapter contract (ADR 0001) is the pivot point: once you have one `AdapterFactory`, every surface in AgentsKit accepts it.
## Related
- README — When you should NOT use AgentsKit — the honest decision matrix
- Concepts: Adapter — why this migration is mostly a rename
- Recipe: Chat with RAG — the equivalent of Vercel AI SDK + LangChain stitched together, in 30 lines