Use any LLM provider through a single OpenAI-compatible API with intelligent routing, fallbacks, and unified observability
Helicone AI Gateway provides a unified API for 100+ LLM providers through the OpenAI SDK format. Instead of learning different SDKs and APIs for each provider, use one familiar interface to access any model with intelligent routing, automatic fallbacks, and complete observability.
The gateway currently supports BYOK (Bring Your Own Keys) and passthrough routing. Pass-through billing (PTB), which lets you use Helicone's API keys, is coming soon.
```typescript
// ❌ Old way - multiple SDKs and endpoints
const openai = new OpenAI({ baseURL: "https://oai.helicone.ai/v1" });
const anthropic = new Anthropic({ baseURL: "https://anthropic.helicone.ai" });

// Switch providers = code changes
const openaiResponse = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [...]
});

const anthropicResponse = await anthropic.messages.create({
  model: "claude-3.5-sonnet",
  messages: [...] // Different message format!
});
```
Use one SDK for everything:
```typescript
// ✅ New way - one SDK, all providers
const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});

// Switch providers = change model string
const response = await client.chat.completions.create({
  model: "gpt-4o-mini", // Works with any model: claude-sonnet-4, gemini-2.5-flash, etc.
  messages: [{ role: "user", content: "Hello!" }]
});
```
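The gateway handles fallbacks server-side, so your application code stays unchanged when a provider has an outage. Conceptually, the behavior resembles this client-side sketch: try each candidate model string in order and return the first success. The function and model names below are illustrative, not part of Helicone's API.

```typescript
// Hedged sketch of fallback semantics: try models in priority order,
// returning the first successful completion. `complete` stands in for a
// call like client.chat.completions.create({ model, messages }).
async function completeWithFallback(
  models: string[],
  complete: (model: string) => Promise<string>
): Promise<string> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await complete(model); // first success wins
    } catch (err) {
      lastError = err; // remember the failure, try the next model
    }
  }
  // Every candidate failed; surface the last error to the caller
  throw lastError;
}
```

Because every provider is addressed through the same model-string interface, a fallback chain like `["gpt-4o-mini", "claude-sonnet-4", "gemini-2.5-flash"]` can span vendors without any per-provider code.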