Beta: The AI Gateway is in beta. For production observability, see our standard integrations.
How does AI Gateway differ from standard integrations?
Our standard integrations provide production-ready observability by adding monitoring to your existing LLM setup. The AI Gateway goes further by replacing multiple provider SDKs with a single unified API, enabling automatic failover, intelligent routing, and seamless provider switching without code changes. We’re actively developing the AI Gateway as the future of Helicone, with migration paths planned for existing users when it reaches general availability.
The gateway currently supports BYOK (Bring Your Own Keys) and passthrough routing. Pass-through billing (PTB), which lets you use Helicone's API keys, is coming soon.
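The automatic failover described above happens server-side in the gateway, but its behavior can be sketched from the client's point of view: try a ranked list of models and return the first success. The model names and the simulated outage below are illustrative, not Helicone's actual API.

```python
def call_model(model: str, prompt: str) -> str:
    """Stand-in for one provider call; raises when the provider is down."""
    if model == "gpt-4o":  # simulate an outage on the primary model
        raise RuntimeError("provider unavailable")
    return f"{model}: response to {prompt!r}"

def complete_with_fallback(models: list[str], prompt: str) -> str:
    """Try each model in priority order, returning the first successful response."""
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except RuntimeError as err:
            last_error = err  # remember the failure and move to the next model
    raise RuntimeError("all models failed") from last_error

# Primary is down, so the request transparently lands on the fallback:
result = complete_with_fallback(["gpt-4o", "claude-3-5-sonnet"], "Hello")
```

Because the gateway owns this loop, your application code never changes when a provider degrades; only the configured fallback order matters.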
Why Use AI Gateway?
One SDK for All Models
Use OpenAI SDK to access GPT, Claude, Gemini, and 100+ other models
Intelligent Routing
Automatic model fallbacks, cost optimization, and load balancing
Unified Observability
Track usage, costs, and performance across all providers in one dashboard
Prompt Management
Deploy and iterate prompts without code changes
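The "one SDK for all models" idea above comes down to a single OpenAI-format request body in which only the model string changes per provider. This sketch builds that body with the standard library; the model names are illustrative, and the real gateway endpoint and authentication are not shown.

```python
import json

def chat_request(model: str, prompt: str) -> dict:
    """Build one OpenAI-format chat body; swapping providers means
    changing only the model string, never the request shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The same shape reaches GPT, Claude, and Gemini models through the gateway:
bodies = [
    chat_request(m, "Hi")
    for m in ("gpt-4o-mini", "claude-3-5-sonnet", "gemini-1.5-flash")
]
payloads = [json.dumps(b) for b in bodies]  # what an OpenAI-compatible SDK would send
```

In practice you would point an OpenAI-compatible SDK's base URL at the gateway rather than serialize requests by hand; the point is that no provider-specific SDK or request format is needed.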
Quick Example
Instead of managing multiple SDKs, you send one OpenAI-format request and switch providers by changing only the model string.
Next Steps
Get Started in 5 Minutes
Set up AI Gateway and make your first request
Browse Model Registry
See all supported models and provider formats
Provider Routing
Configure automatic routing and fallbacks for reliability
Prompt Integration
Deploy and manage prompts through the gateway
Rate Limiting
Control usage and prevent abuse
Security Features
Protect your applications with built-in security
Want to integrate a new model provider into the AI Gateway? Check out our tutorial for detailed instructions.