Adapter ensemble

Send one request to N models in parallel and aggregate the answers into a single output.

Ensembles are the "wisdom of crowds" move for LLMs: send the same request to several models, then combine their answers. Great for reducing variance on high-stakes outputs (classification labels, structured extraction, critical summaries).

Install

The ensemble adapter ships with @agentskit/adapters; no separate package is needed.

Quick start — majority vote

import { createEnsembleAdapter, anthropic, openai } from '@agentskit/adapters'

const ensemble = createEnsembleAdapter({
  candidates: [
    { id: 'haiku', adapter: anthropic({ model: 'claude-haiku-4-5' }) },
    { id: 'sonnet', adapter: anthropic({ model: 'claude-sonnet-4-6' }) },
    { id: 'gpt-mini', adapter: openai({ model: 'gpt-4o-mini' }) },
  ],
  // aggregate: 'majority-vote' is the default
})

The resulting AdapterFactory emits a single text chunk carrying the aggregated answer followed by done, so it plugs into any runtime that expects a streaming adapter.
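The contract described above can be pictured as a small async generator: fan the prompt out to every branch, aggregate the survivors, then yield one text chunk and a done marker. This is an illustrative sketch only; the chunk shapes ({ type: 'text', text } / { type: 'done' }) and the branch signature are assumptions, not the library's actual types.

```javascript
// Sketch of the ensemble's output contract: one aggregated text chunk, then done.
// Each branch is assumed to be (prompt) => Promise<string>; failed branches are
// silently dropped before aggregation, matching the behavior described above.
async function* ensembleStream(prompt, branches, aggregate) {
  // Run every branch in parallel and keep only the fulfilled ones.
  const settled = await Promise.allSettled(branches.map(b => b(prompt)));
  const texts = settled
    .filter(r => r.status === 'fulfilled')
    .map(r => ({ text: r.value }));
  yield { type: 'text', text: await aggregate(texts) };
  yield { type: 'done' };
}
```

Because the aggregated answer arrives as an ordinary text chunk followed by done, a consumer cannot tell an ensemble apart from a single streaming model.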

Aggregators

The aggregate option controls how branch outputs are combined:

  • 'majority-vote' (default): weighted vote; picks the text with the highest total weight (1 per candidate unless overridden)
  • 'concat': joins all branches with \n---\n; useful for review workflows
  • 'longest': picks the branch with the most bytes of text
  • a function: a custom aggregator with signature (branches) => string, sync or async

For example, to prefer a branch whose output parses as JSON:
createEnsembleAdapter({
  aggregate: branches => {
    // Prefer a branch whose output parses as JSON.
    const parsed = branches.find(b => {
      try { JSON.parse(b.text); return true } catch { return false }
    })
    return parsed?.text ?? branches[0].text
  },
  candidates: [...],
})

Weighted vote

Give higher-quality models a stronger say.

createEnsembleAdapter({
  candidates: [
    { id: 'cheap', adapter: haikuFactory, weight: 1 },
    { id: 'premium', adapter: sonnetFactory, weight: 3 },
  ],
})
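Under the hood, a weighted majority vote just sums the weights of identical answers and returns the heaviest one. A minimal standalone sketch, assuming branches arrive as { text, weight } objects with weight defaulting to 1:

```javascript
// Weighted majority vote: group branches by exact text, sum their weights,
// and return the text with the highest total.
function majorityVote(branches) {
  const totals = new Map();
  for (const { text, weight = 1 } of branches) {
    totals.set(text, (totals.get(text) ?? 0) + weight);
  }
  let best = null;
  for (const [text, total] of totals) {
    if (best === null || total > best.total) best = { text, total };
  }
  return best.text;
}
```

With the weights above, 'premium' (weight 3) outvotes 'cheap' (weight 1) whenever the two disagree; it would take four agreeing weight-1 branches to overrule it.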

Timeouts and resilience

  • timeoutMs bounds every branch. A branch that times out is marked with an error and skipped.
  • If every branch fails, the adapter throws "all ensemble branches failed", so callers can retry or fall back.
  • onBranches fires once per request with every branch's raw output, so you can log, cost-meter, or audit.
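The timeout-and-skip behavior amounts to racing each branch against a timer, discarding losers, and throwing only when nothing survives. The helper names (withTimeout, runBranches) and error messages below are illustrative, not the library's exports:

```javascript
// Bound a single branch promise: whichever settles first wins the race.
function withTimeout(promise, timeoutMs) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('branch timed out')), timeoutMs);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Run all branches in parallel; skip timed-out or errored branches, and
// throw only when every branch failed (mirroring the behavior above).
async function runBranches(branches, timeoutMs) {
  const settled = await Promise.allSettled(
    branches.map(b => withTimeout(b(), timeoutMs))
  );
  const ok = settled.filter(r => r.status === 'fulfilled').map(r => r.value);
  if (ok.length === 0) throw new Error('all ensemble branches failed');
  return ok;
}
```

Promise.allSettled is what makes a slow or broken model degrade the ensemble gracefully instead of failing the whole request.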
