Helicone AI Gateway provides a unified API for 100+ LLM providers through the OpenAI SDK format. Instead of learning different SDKs and APIs for each provider, use one familiar interface to access any model with intelligent routing, automatic fallbacks, and complete observability.
The gateway currently supports BYOK (Bring Your Own Keys) and passthrough routing. Pass-through billing (PTB), which lets you use Helicone's API keys instead of your own, is coming soon.

Why Use AI Gateway?

One SDK for All Models

Use OpenAI SDK to access GPT, Claude, Gemini, and 100+ other models

Intelligent Routing

Automatic model fallbacks, cost optimization, and load balancing

Unified Observability

Track usage, costs, and performance across all providers in one dashboard

Prompt Management

Deploy and iterate prompts without code changes

Quick Example

Instead of managing multiple SDKs:
// ❌ Old way - multiple SDKs and endpoints
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

const openai = new OpenAI({ baseURL: "https://oai.helicone.ai/v1" });
const anthropic = new Anthropic({ baseURL: "https://anthropic.helicone.ai" });

// Switch providers = code changes
const openaiResponse = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [...]
});

const anthropicResponse = await anthropic.messages.create({
  model: "claude-3-5-sonnet-latest",
  max_tokens: 1024,      // Required by the Anthropic Messages API
  messages: [...]        // Different message format!
});
Use one SDK for everything:
// ✅ New way - one SDK, all providers
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});

// Switch providers = change model string
const response = await client.chat.completions.create({
  model: "gpt-4o-mini",  // Works with any model: claude-sonnet-4, gemini-2.5-flash, etc.
  messages: [{ role: "user", content: "Hello!" }]
});
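Because every provider sits behind the same OpenAI-format endpoint, swapping providers reduces to swapping the model string. A minimal sketch of that idea (the `buildChatRequest` helper is hypothetical, not part of any SDK; model names follow the example above):

```typescript
// One request shape for every provider behind the gateway.
type ChatParams = {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
};

// Hypothetical helper: builds the same OpenAI-format payload for any model.
function buildChatRequest(model: string, prompt: string): ChatParams {
  return { model, messages: [{ role: "user", content: prompt }] };
}

// Switching providers is a one-string change; nothing else varies:
const forOpenAI = buildChatRequest("gpt-4o-mini", "Hello!");
const forClaude = buildChatRequest("claude-sonnet-4", "Hello!");
// Either payload goes straight to client.chat.completions.create(...).
```

The point of the sketch: the payload shape is identical across providers, so provider selection becomes configuration (a string, perhaps from an env var) rather than code.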

Next Steps