# ollama

Ollama — local LLMs on your laptop. Zero-cost, offline, private.
```ts
import { ollama } from '@agentskit/adapters'

const adapter = ollama({
  model: 'llama3.2',
  url: 'http://localhost:11434',
})
```

## Options
| Option | Type | Default |
|---|---|---|
| `model` | `string` | required |
| `url` | `string` | `http://localhost:11434` |
| `fetch` | `typeof fetch` | global `fetch` |
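As a sketch of how the defaults in the table above might resolve — `OllamaOptions` and `resolveOptions` are illustrative names for this example, not exports of the package:

```typescript
// Illustrative only: models the option defaults from the table above.
type OllamaOptions = {
  model: string // required
  url?: string // defaults to the local Ollama server
  fetch?: typeof fetch // defaults to the global fetch
}

function resolveOptions(opts: OllamaOptions) {
  return {
    model: opts.model,
    url: opts.url ?? 'http://localhost:11434',
    fetch: opts.fetch ?? globalThis.fetch,
  }
}

// Only `model` is required; the rest fall back to defaults.
console.log(resolveOptions({ model: 'llama3.2' }).url)
```

Overriding `fetch` is useful for injecting timeouts, logging, or auth headers when the Ollama server is not on localhost.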
## Why ollama
- Zero config, zero cost, offline.
- Supports tool calling for models that implement it (Llama 3.1+, Qwen 2.5+).
- Install with `curl -fsSL https://ollama.com/install.sh | sh`.
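For tool calling, Ollama's `/api/chat` endpoint accepts OpenAI-style function definitions for models that support them. A sketch of the request body (shape only, not sent here; the `get_weather` tool is a made-up example):

```typescript
// Shape of a tool-calling request to Ollama's /api/chat endpoint.
// Works with tool-capable models such as Llama 3.1+ or Qwen 2.5+.
const body = {
  model: 'llama3.2',
  messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather', // hypothetical tool for illustration
        description: 'Get the current weather for a city',
        parameters: {
          type: 'object',
          properties: { city: { type: 'string' } },
          required: ['city'],
        },
      },
    },
  ],
  stream: false,
}

console.log(JSON.stringify(body.tools[0].function.name))
```

When the model decides to call a tool, the response message carries a `tool_calls` array instead of plain text.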
## Notes
- Inference speed scales with your local GPU or Apple Silicon throughput.
- For shared team servers, pair this adapter with `createRouter` to fail over to a hosted adapter under load.
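The failover pattern that `createRouter` enables can be sketched as a plain helper — `withFallback` below is a hypothetical illustration, not the router's actual API:

```typescript
// Hypothetical helper: try the local adapter first, fall back to a
// hosted adapter if the local call throws (e.g. server overloaded).
async function withFallback<T>(
  primary: () => Promise<T>,
  fallback: () => Promise<T>,
): Promise<T> {
  try {
    return await primary()
  } catch {
    return await fallback()
  }
}

// Demo with plain async functions standing in for adapter calls:
async function main() {
  const result = await withFallback(
    async () => { throw new Error('local ollama unavailable') },
    async () => 'hosted response',
  )
  console.log(result) // prints "hosted response"
}
main()
```

The real router can additionally weigh load or latency when choosing a target; the sketch only shows the error-based fallback.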