Documentation for Helicone’s legacy cloud-based AI Gateway
This cloud AI Gateway is deprecated. It remains available to existing users, but we are no longer adding new features or shipping major updates to this version.
We’re building a new cloud-hosted AI Gateway based on our improved self-hosted AI Gateway. The new version offers:
Timeline:
While you’re using the current cloud gateway, here’s the complete documentation for all available features:
- Configure request rate limiting and spending controls for your applications
- Set up automatic fallback providers when your primary LLM provider fails
- Implement security measures and content filtering for your LLM requests
- Automatically detect and filter inappropriate content in requests and responses
- Configure retry logic for failed requests with exponential backoff
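To illustrate the rate-limiting idea above, here is a generic token-bucket sketch in Python. This is not Helicone's implementation (the gateway configures limits for you); all names here are illustrative.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: holds up to `capacity` tokens,
    refilled continuously at `rate` tokens per second. Each request
    spends one token; requests with no token available are rejected."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A burst of requests beyond the bucket's capacity is rejected until the refill rate replenishes tokens, which is how spending and request-rate ceilings are typically enforced.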
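The fallback behavior described above can be sketched as trying providers in priority order until one succeeds. This is a minimal stand-in, not the gateway's internal logic; the provider names and callables are hypothetical.

```python
def call_with_fallback(providers, prompt):
    """Try each (name, call) pair in order; return the first success.

    `providers` is an ordered list of (name, callable) pairs, e.g. a
    primary LLM provider followed by one or more backups.
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            # Record the failure and fall through to the next provider.
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```

If the primary raises (timeout, 5xx, quota error), the request is transparently retried against the next provider in the list.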
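Exponential backoff, as mentioned in the retry feature above, means spacing retries at geometrically growing delays. A generic sketch of the delay schedule, with an optional cap and full jitter (both conventional additions, not a description of the gateway's exact parameters):

```python
import random

def backoff_delays(base: float, retries: int, cap: float = 30.0, jitter: bool = False):
    """Delay (in seconds) before each retry attempt: base * 2**attempt,
    capped at `cap`. With `jitter`, each delay is drawn uniformly from
    [0, delay] to avoid synchronized retry storms."""
    delays = []
    for attempt in range(retries):
        delay = min(cap, base * (2 ** attempt))
        if jitter:
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays
```

With a 1-second base, the first retries wait 1 s, 2 s, 4 s, 8 s, and so on until the cap, giving a failing upstream time to recover instead of hammering it.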