1. Configure provider secrets

To get started, you’ll need to configure secrets for the providers you want to use. Set up your .env file with your PROVIDER_API_KEYs:
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
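Before starting the Gateway, it can help to sanity-check that those variables are actually visible to your shell or process. A minimal sketch (the `missingKeys` helper is hypothetical, written for illustration; in a real script you would pass `process.env`):

```typescript
// Sketch: report which provider keys are missing before starting the Gateway.
// missingKeys is a hypothetical helper, not part of the Gateway itself.
function missingKeys(
  env: Record<string, string | undefined>,
  names: string[],
): string[] {
  return names.filter((name) => !env[name]);
}

// In a real script, pass process.env instead of this example object.
const missing = missingKeys(
  { OPENAI_API_KEY: "sk-example" },
  ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"],
);
console.log(missing); // → [ 'ANTHROPIC_API_KEY' ]
```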
2. Start the Gateway

npx @helicone/ai-gateway@latest
The Gateway runs on http://localhost:8080 and exposes three routes:
  • /ai for a standard OpenAI-compatible Unified API that works out of the box
  • /router/{router-id} for advanced Unified API with custom routing logic and load balancing
  • /{provider-name} for direct access to a specific provider without routing
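The three route styles differ only in the path they are mounted on. A quick sketch of how client base URLs line up with each route (the router id "my-router" and provider name "openai" below are placeholder examples, not required values):

```typescript
// Sketch: mapping the Gateway's three route styles to client base URLs.
// "my-router" and "openai" are example placeholders.
const GATEWAY = "http://localhost:8080";

const unifiedBaseUrl = `${GATEWAY}/ai`; // OpenAI-compatible unified API
const routerBaseUrl = (routerId: string) => `${GATEWAY}/router/${routerId}`;
const providerBaseUrl = (provider: string) => `${GATEWAY}/${provider}`;

console.log(unifiedBaseUrl);             // http://localhost:8080/ai
console.log(routerBaseUrl("my-router")); // http://localhost:8080/router/my-router
console.log(providerBaseUrl("openai"));  // http://localhost:8080/openai
```

Whichever style you choose becomes the `baseURL` you hand to your client SDK.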
3. Make your first request

Let’s start with a simple request to the pre-configured /ai route. Don’t worry, we’ll show you how to create custom routers next!
import { OpenAI } from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:8080/ai",
  apiKey: "fake-api-key", // Required by SDK, but gateway handles real auth
});

const response = await openai.chat.completions.create({
  model: "openai/gpt-4o-mini", // 100+ models available
  messages: [{ role: "user", content: "Hello, world!" }],
});

console.log(response);
Try switching models! Simply change the model parameter to "anthropic/claude-3-5-sonnet" to use Anthropic instead of OpenAI. Same API, different provider - that’s the power of the unified interface!
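Under the unified routes, model ids follow a "provider/model" naming scheme, which is what lets a one-word change switch providers. A small sketch of that scheme (the `splitModelId` helper is hypothetical, for illustration only):

```typescript
// Sketch: model ids on the unified routes take the "provider/model" form.
// splitModelId is a hypothetical helper, not part of the Gateway API.
function splitModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash < 0) throw new Error(`expected "provider/model", got "${id}"`);
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

console.log(splitModelId("openai/gpt-4o-mini"));
// → { provider: 'openai', model: 'gpt-4o-mini' }
console.log(splitModelId("anthropic/claude-3-5-sonnet"));
// → { provider: 'anthropic', model: 'claude-3-5-sonnet' }
```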
You’re all set! 🎉 Your AI Gateway is now ready to handle requests across 100+ AI models!
4. Optional: Enable Helicone observability

Gain detailed tracing and insights into your AI usage directly from your Gateway. Just add the following environment variable to your Gateway configuration:
export HELICONE_CONTROL_PLANE_API_KEY=your-api-key

Next steps:

Great job getting your self-hosted Gateway running! Here are some important next steps: