Cookbook
RAG in 15 lines
createRAG with a file loader and in-memory vector store. Working retrieval in under a screen of code.
RAG does not require a vector database, a cluster, or a PhD. Start here, swap pieces later.
```ts
import { createRAG } from '@agentskit/rag'
import { fileLoader } from '@agentskit/rag/loaders'
import { inMemoryStore } from '@agentskit/memory/vector'

const rag = createRAG({
  store: inMemoryStore(),
  loaders: [fileLoader({ glob: './docs/**/*.md' })],
  embed: { model: 'text-embedding-3-small' },
})

await rag.ingest() // chunks + embeds every doc
const context = await rag.retrieve('how do streams work?', { topK: 5 })
```

Tip
Swap inMemoryStore() for lanceStore({ path: './vectors.lance' }) when your corpus outgrows RAM. Zero code change anywhere else.
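The "zero code change" claim comes down to the store contract: every store exposes the same small surface, so the rest of your app never sees which one is behind it. A minimal sketch of that idea, where the VectorStore shape and its upsert/query method names are illustrative, not the real @agentskit/rag API:

```typescript
// Illustrative store contract: any implementation that satisfies this
// interface is a drop-in replacement for any other.
interface VectorStore {
  upsert(id: string, vector: number[]): void;
  query(vector: number[], topK: number): string[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// In-memory implementation: a plain Map, nothing persisted to disk.
class InMemoryStore implements VectorStore {
  private vectors = new Map<string, number[]>();
  upsert(id: string, vector: number[]): void {
    this.vectors.set(id, vector);
  }
  query(vector: number[], topK: number): string[] {
    return [...this.vectors.entries()]
      .map(([id, v]) => ({ id, score: cosine(vector, v) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, topK)
      .map((r) => r.id);
  }
}

const store: VectorStore = new InMemoryStore();
store.upsert('doc-a', [1, 0]);
store.upsert('doc-b', [0, 1]);
console.log(store.query([0.9, 0.1], 1)); // nearest neighbor: doc-a
```

A disk-backed store (the lanceStore of the tip) would implement the same interface with persistence behind it; callers never change.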
⚡ Performance
rag.ingest() is idempotent — reruns only re-embed chunks whose source content changed. Safe to call on every deploy.
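A sketch of what that idempotency contract implies: hash each chunk's source content and re-embed only chunks whose hash differs from the previous run's manifest. The hashing scheme and manifest shape here are assumptions for illustration, not the library's internals.

```typescript
import { createHash } from 'node:crypto';

function hashOf(text: string): string {
  return createHash('sha256').update(text).digest('hex');
}

// manifest: chunk id -> content hash recorded on the last ingest
function chunksToEmbed(
  chunks: Map<string, string>,
  manifest: Map<string, string>,
): string[] {
  const stale: string[] = [];
  for (const [id, text] of chunks) {
    const h = hashOf(text);
    if (manifest.get(id) !== h) {
      stale.push(id);      // new or changed since the last run
      manifest.set(id, h); // record for the next run
    }
  }
  return stale;
}

const manifest = new Map<string, string>();
const v1 = new Map([['intro', 'Streams move data...'], ['api', 'retrieve(query)...']]);
console.log(chunksToEmbed(v1, manifest)); // first run: every chunk is new

const v2 = new Map([['intro', 'Streams move data...'], ['api', 'retrieve(query, opts)...']]);
console.log(chunksToEmbed(v2, manifest)); // rerun: only the changed chunk
```

This is why rerunning on every deploy is cheap: unchanged docs produce unchanged hashes and skip the embedding call entirely.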
Explore nearby
- Cookbook
Copy-paste recipes for the things every agent app needs. Each recipe stands on its own.
- Streaming chat
useChat + abort + back-pressure. The minimum viable streaming chat, production-ready.
- Tools + memory together
The "chat with state and actions" loop — persistent memory plus tool execution.