agentskit.js
Providers

ollama

Ollama — local LLMs on your laptop. Zero-cost, offline, private.

import { ollama } from '@agentskit/adapters'

const adapter = ollama({
  model: 'llama3.2',
  url: 'http://localhost:11434',
})

Options

Option   Type           Default
model    string         required
url      string         http://localhost:11434
fetch    typeof fetch   global

Why ollama

  • Zero config, zero cost, offline.
  • Supports tool calling for models that implement it (Llama 3.1+, Qwen 2.5+).
  • Install: curl -fsSL https://ollama.com/install.sh | sh.
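Under the hood, tool calling goes through Ollama's /api/chat endpoint. The sketch below builds the raw request body that endpoint expects; the tool name and schema are illustrative, not part of agentskit.

```javascript
// Shape of a raw Ollama tool-calling request (Ollama /api/chat).
// The tool definition is a JSON Schema; get_weather is a hypothetical example.
const body = {
  model: 'llama3.2',
  stream: false,
  messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather', // hypothetical tool name
        description: 'Look up current weather for a city',
        parameters: {
          type: 'object',
          properties: { city: { type: 'string' } },
          required: ['city'],
        },
      },
    },
  ],
}

// Sending it (requires a running Ollama server):
// const res = await fetch('http://localhost:11434/api/chat', {
//   method: 'POST',
//   body: JSON.stringify(body),
// })
// Models that support tools reply with message.tool_calls instead of plain text.
```

Models without tool support ignore the tools field or answer in plain text, so check for message.tool_calls before dispatching.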

Notes

  • Throughput depends on your local GPU or Apple Silicon; expect slower responses than hosted APIs on modest hardware.
  • For team-shared servers, pair with createRouter to fail over to a hosted adapter under load.
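The fail-over idea can be sketched in a few lines. This is not the real createRouter API — it assumes an adapter is any object with an async complete() method (a hypothetical shape), and simply falls through to the backup when the primary throws.

```javascript
// Minimal fail-over sketch, NOT the actual createRouter implementation.
// "Adapters" here are hypothetical objects exposing complete(prompt).
function failover(primary, fallback) {
  return {
    async complete(prompt) {
      try {
        // Try the local adapter first (e.g. an overloaded shared Ollama box).
        return await primary.complete(prompt)
      } catch {
        // Any failure routes the same prompt to the hosted fallback.
        return await fallback.complete(prompt)
      }
    },
  }
}

// Usage with stub adapters standing in for ollama() and a hosted provider:
const local = { complete: async () => { throw new Error('overloaded') } }
const hosted = { complete: async (p) => `hosted: ${p}` }
const router = failover(local, hosted)
// await router.complete('hi') → 'hosted: hi'
```

A real router would also want a health check or timeout rather than waiting for the primary to throw, but the shape of the decision is the same.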