# From LangGraph

Side-by-side migration guide: map LangGraph stateful graphs to the AgentsKit runtime + topologies.
LangGraph encodes agent control flow as an explicit state machine: nodes,
edges, conditionals. AgentsKit approaches the same problem from the opposite
direction: a runtime that already knows how to loop, with composable
topologies (supervisor / swarm / hierarchical / blackboard) and
`compileFlow` for declarative DAGs. Migrate when:
- You want less ceremony for the common ReAct case: no nodes, no edges, just `runtime.run(task)`.
- You need first-class durability for long workflows (`createDurableRunner`) without spinning up a separate orchestrator.
- You want the same adapter across every surface (terminal, CLI, runtime, React, Ink) instead of a graph wrapper around one model.
- You'd rather author a flow as YAML the team can review than as code (`agentskit flow`).
Stay with LangGraph when:
- You depend on LangSmith's graph debugger and your team thinks visually in nodes/edges.
- You have dozens of conditional transitions that map naturally onto a state-machine view.
- You're already deep in the LangChain ecosystem and the integration cost outweighs the runtime simplification.
## Quick reference
| LangGraph | AgentsKit | Notes |
|---|---|---|
| `StateGraph(stateType)` | `createRuntime({ adapter, tools, memory })` | Most ReAct loops don't need an explicit graph. |
| `graph.add_node(name, fn)` | A `ToolDefinition` or a `SkillDefinition` delegate | Nodes that wrap an LLM call become a delegate; nodes that wrap I/O become a tool. |
| `graph.add_edge('a', 'b')` | Implicit in the runtime loop | The runtime already moves from "model thinks" → "tool runs" → "model continues". |
| `add_conditional_edges(...)` | Custom delegate router or `compileFlow` `needs:` | Branching belongs in code, not in YAML; conditionals stay in a handler. |
| `graph.compile()` | `compileFlow({ definition, registry })` for DAGs | YAML or JSON object → durable runner. See Visual flows. |
| Checkpointer | `createDurableRunner({ store, runId })` | File or in-memory step log; reuse `runId` to resume. |
| `Send(...)` (parallel fan-out) | `swarm({ members })` or `compileFlow` parallel `needs` | Topologies wrap N agents into one. |
| Multi-agent (supervisor + workers) | `supervisor({ supervisor, workers })` | Built-in. See Topologies. |
| Tool node | `ToolDefinition` registered on the runtime | Same JSON Schema draft-07 inputs. |
| `interrupt()` | `createApprovalGate` (HITL) | Pause + resume on a human decision. |
## 1. Basic ReAct loop
### Before: LangGraph
```python
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode

graph = StateGraph(AgentState)
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode(tools))
graph.add_edge("tools", "agent")
graph.add_conditional_edges("agent", should_continue, {"continue": "tools", "end": END})
graph.set_entry_point("agent")
app = graph.compile()
```

### After: AgentsKit
```typescript
import { createRuntime } from '@agentskit/runtime'
import { openai } from '@agentskit/adapters'
import { webSearch } from '@agentskit/tools'

const runtime = createRuntime({
  adapter: openai({ apiKey: KEY, model: 'gpt-4o-mini' }),
  tools: [webSearch()],
  maxSteps: 10,
})

await runtime.run('Find the top 3 results for "agent frameworks 2026"')
```

The agent/tool alternation, the "should continue" decision, and the entry point are all the runtime's job. You configure capacity (`maxSteps`) and content (tools, memory, skill).
## 2. Stateful graph → durable runner
### Before: LangGraph
```python
from langgraph.checkpoint.sqlite import SqliteSaver

memory = SqliteSaver.from_conn_string("checkpoints.db")
app = graph.compile(checkpointer=memory)
config = {"configurable": {"thread_id": "user-42"}}
result = await app.ainvoke({"messages": [...]}, config)
```

### After: AgentsKit
```typescript
import { createDurableRunner, createFileStepLog } from '@agentskit/runtime'

const store = await createFileStepLog('.agentskit/runs.jsonl')
const runner = createDurableRunner({ store, runId: 'user-42' })

await runner.step('plan', () => runtime.run(task))
await runner.step('act', () => runtime.run(followUp))
```

Each `runner.step(id, fn)` call is recorded; replays short-circuit on resume. Crash, deploy, or retry: the same `runId` picks up where it stopped. See Durable execution.
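For intuition, durable steps can be understood as a replay log keyed by run id and step id: completed steps are recorded, and on resume a step that is already in the log returns its recorded result instead of re-executing. A minimal sketch of that mechanic, with illustrative names (`makeDurableRunner`, `StepLog`) rather than the AgentsKit API:

```typescript
// Minimal sketch of durable-step replay, assuming a Map-backed step log.
// `makeDurableRunner` and `StepLog` are illustrative names, not AgentsKit API.
type StepLog = Map<string, unknown>

function makeDurableRunner(store: StepLog, runId: string) {
  return {
    async step<T>(id: string, fn: () => T | Promise<T>): Promise<T> {
      const key = `${runId}:${id}`
      // Replay: a step already in the log short-circuits to its recorded result.
      if (store.has(key)) return store.get(key) as T
      const result = await fn()
      store.set(key, result) // persist before the next step runs
      return result
    },
  }
}
```

A crash between steps loses nothing: re-creating the runner with the same `runId` over the same store skips every completed step.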
## 3. Conditional graphs → flow YAML
LangGraph excels at "if state.x then go to node Y" chains. When that's the actual shape, AgentsKit ships compileFlow:
```yaml
name: nightly-refresh
nodes:
  - id: fetch
    run: http.get
    with: { url: https://api.example.com/items }
  - id: parse
    run: json.parse
    needs: [fetch]
  - id: write
    run: cache.write
    needs: [parse]
```

```typescript
import { compileFlow } from '@agentskit/runtime'

const compiled = compileFlow({
  definition,
  registry: {
    'http.get': ({ with: w }) => fetch(w.url as string).then(r => r.text()),
    'json.parse': ({ deps }) => JSON.parse(deps.fetch as string),
    'cache.write': ({ deps }) => cache.set('items', deps.parse),
  },
})

await compiled.run(input, { runId: 'nightly', store })
```

Branching inside a node stays in code; flow YAML stays linear (no `if`, no expressions), so it's reviewable. See Visual flows.
## 4. Multi-agent → supervisor / swarm
### LangGraph supervisor pattern
```python
graph = StateGraph(...)
graph.add_node("supervisor", supervisor_fn)
graph.add_node("researcher", researcher_node)
graph.add_node("coder", coder_node)
graph.add_conditional_edges("supervisor", route_to_worker, {...})
```

### AgentsKit
```typescript
import { supervisor } from '@agentskit/runtime'
import { researcher, coder } from '@agentskit/skills'

const research = createRuntime({ adapter, skill: researcher, tools: [...] })
const code = createRuntime({ adapter, skill: coder, tools: [...] })

const ensemble = supervisor({
  supervisor: { name: 'lead', run: task => leadRuntime.run(task).then(r => r.content) },
  workers: [
    { name: 'research', run: task => research.run(task).then(r => r.content) },
    { name: 'code', run: task => code.run(task).then(r => r.content) },
  ],
  maxRounds: 3,
})

await ensemble.run('Build a landing page about quantum computing')
```

`swarm`, `hierarchical`, and `blackboard` topologies are interchangeable: they all return an `AgentHandle`. See Topologies.
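For intuition, the round loop a supervisor topology runs can be sketched as follows. This is illustrative, not the AgentsKit internals, and it assumes the lead's reply either names the next worker or is the final answer:

```typescript
// Illustrative supervisor round loop (not the AgentsKit implementation).
type Agent = { name: string; run: (task: string) => Promise<string> }

async function runSupervised(lead: Agent, workers: Agent[], task: string, maxRounds: number) {
  const transcript: string[] = []
  for (let round = 0; round < maxRounds; round++) {
    // The lead sees the task plus everything workers produced so far.
    const route = await lead.run([task, ...transcript].join('\n'))
    const worker = workers.find(w => route.includes(w.name))
    if (!worker) return route // no worker named → the lead gave a final answer
    transcript.push(`${worker.name}: ${await worker.run(task)}`)
  }
  return transcript.join('\n')
}
```

`maxRounds` plays the same role as in the `supervisor({ ... })` call above: it bounds how many times the lead may delegate before the loop stops.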
## 5. Human-in-the-loop
LangGraph: `interrupt()` plus a separate channel to deliver the answer.
AgentsKit:
```typescript
import { createApprovalGate, createInMemoryApprovalStore } from '@agentskit/core/hitl'

const gate = createApprovalGate({ store: createInMemoryApprovalStore() })

await gate.open({ id: 'approve-spend', payload: { amountUsd: 42 } })
// somewhere else: await gate.decide('approve-spend', 'approved')
const decision = await gate.await('approve-spend', { timeoutMs: 5 * 60_000 })
```

See HITL approvals.
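The gate protocol is small enough to sketch: a decision record plus a promise that resolves when `decide` lands, or rejects on timeout. This is a hedged, in-memory illustration of the open/decide/await shape, not the `@agentskit/core/hitl` source:

```typescript
// Hedged sketch of open/decide/await gate mechanics (illustrative only).
type Decision = 'approved' | 'rejected'

function makeApprovalGate() {
  const decisions = new Map<string, Decision>()
  const waiters = new Map<string, (d: Decision) => void>()
  return {
    async open(_req: { id: string; payload?: unknown }) {
      // A real store would persist the pending request here.
    },
    async decide(id: string, d: Decision) {
      decisions.set(id, d)
      waiters.get(id)?.(d) // wake a waiter already blocked on this id
    },
    await(id: string, opts: { timeoutMs: number }): Promise<Decision> {
      const existing = decisions.get(id)
      if (existing !== undefined) return Promise.resolve(existing)
      return new Promise((resolve, reject) => {
        const timer = setTimeout(() => reject(new Error('approval timed out')), opts.timeoutMs)
        waiters.set(id, d => { clearTimeout(timer); resolve(d) })
      })
    },
  }
}
```

The key property is that `decide` can arrive before or after `await`: an early decision resolves immediately, a late one wakes the pending waiter.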
## 6. Streaming
```typescript
import { createRuntime } from '@agentskit/runtime'

const runtime = createRuntime({
  adapter,
  observers: [
    { name: 'log', on: e => e.type === 'text' && process.stdout.write((e as { content: string }).content) },
  ],
})
```

The same observer plugs into LangSmith / OpenTelemetry sinks if you want to keep your existing dashboards.
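The observer contract is just a fan-out of typed events: every observer's `on` handler sees every event, and each sink filters for the types it cares about. A minimal sketch, with an illustrative event type rather than the AgentsKit event schema:

```typescript
// Illustrative observer fan-out: every observer's `on` sees every event.
type RuntimeEvent = { type: 'text'; content: string } | { type: 'step'; n: number }
type Observer = { name: string; on: (e: RuntimeEvent) => void }

function emitAll(observers: Observer[], events: RuntimeEvent[]) {
  for (const e of events) for (const o of observers) o.on(e) // broadcast in order
}
```

This is why the `e.type === 'text'` guard appears in the logging observer above: handlers narrow the event stream themselves instead of subscribing per type.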
## Incremental migration
You don't have to port the whole graph at once:
- Keep your LangGraph app for the critical path.
- Wrap one node as an AgentsKit runtime: point it at the same provider and see how the loop simplifies.
- Port a parallel fan-out to `swarm` or `compileFlow`: an easy win, since LangGraph's `Send` is verbose.
- Move durable runs to `createDurableRunner` when the checkpointer story stops being enough.
## Related
- Visual flows: the YAML DAG counterpart.
- Topologies: supervisor / swarm / hierarchical / blackboard.
- Durable execution: Temporal-style step logs.
- From LangChain: companion guide for the LangChain runtime layer.