Quickstart
Get started with Helicone AI Gateway in 1 minute
Helicone AI Gateway is currently only available as a self-hosted solution. Our cloud-based solution is coming soon.
Configure provider secrets
To get started, you'll need to configure secrets for each provider you want to route to.
Simply export your provider API keys as environment variables:
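For example, assuming the Gateway picks up the providers' conventional environment variable names (set keys only for the providers you plan to use):

```shell
# API keys for the providers the Gateway should route to.
# The sk-... values are placeholders for your real keys.
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
```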
Start the Gateway
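One way to launch the Gateway locally is via npx; if you installed it another way (Docker, prebuilt binary), use the equivalent command from the installation docs:

```shell
# Starts the Gateway on the default port (8080)
npx @helicone/ai-gateway@latest
```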
The Gateway will be running on http://localhost:8080 and has three routes:
- `/ai`: a standard OpenAI-compatible Unified API that works out of the box
- `/router/{router-name}`: an advanced Unified API with custom routing logic and load balancing
- `/{provider-name}`: direct access to a specific provider without routing
Make your first request
Let’s start with a simple request to the pre-configured /ai
route. Don’t worry, we’ll show you how to create custom routers next!
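Since the `/ai` route is OpenAI-compatible, a plain curl against the chat completions endpoint is enough to test it (the `openai/gpt-4o-mini` model name here is an illustrative provider-prefixed model; substitute any model your configured providers support):

```shell
curl http://localhost:8080/ai/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, world!"}]
  }'
```

Because the request body follows the OpenAI format, any OpenAI SDK pointed at `http://localhost:8080/ai` as its base URL should work the same way.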
Try switching models! Simply change the model parameter to "anthropic/claude-3-5-sonnet"
to use Anthropic instead of OpenAI. Same API, different provider - that’s the power of the unified interface!
You’re all set! 🎉
Your AI Gateway is now ready to handle requests across 100+ AI models!
Optional: Enable Helicone observability
Gain detailed tracing and insights into your AI usage directly from your Gateway.
Just add the following environment variables to your Gateway configuration:
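As a sketch, the configuration amounts to exporting your Helicone key before starting the Gateway. The variable name below is an assumption; confirm the exact names against the Gateway's configuration reference for your version:

```shell
# Assumed variable name -- check the Gateway's configuration
# reference. The value is a placeholder for your Helicone API key.
export HELICONE_API_KEY=<your-helicone-api-key>
```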
Next step:
Great job getting your Gateway started! The next step is making it work exactly how you want.
Interested in adding new providers, balancing request loads, or caching responses for efficiency?
Router Quickstart
Build custom routers with load balancing, caching, and multiple environments in 5 minutes