
    Cloud AI Gateway

    Documentation for Helicone’s cloud-based AI Gateway
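The gateway works as a drop-in proxy: you point your client at Helicone's endpoint and authenticate with a Helicone API key alongside your provider key. A minimal sketch, assuming the `oai.helicone.ai` base URL and the `Helicone-Auth` header name (placeholders, not confirmed by this page):

```python
# Hypothetical sketch: route requests through the cloud gateway by
# swapping the provider base URL for Helicone's proxy endpoint and
# attaching a Helicone API key. URL and header name are assumptions.
gateway_base_url = "https://oai.helicone.ai/v1"  # assumed proxy endpoint

headers = {
    "Authorization": "Bearer <PROVIDER_API_KEY>",   # normal provider auth
    "Helicone-Auth": "Bearer <HELICONE_API_KEY>",   # gateway auth (assumed)
    "Content-Type": "application/json",
}
```

Gateway features below are then enabled per request by adding further `Helicone-*` headers to this same dictionary.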

    Cloud AI Gateway Features

Here’s an overview of all available gateway features:

    Core Gateway Features

    Custom Rate Limits

    Configure request rate limiting and spending controls for your applications
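Rate limits are applied per request via a policy header. A minimal sketch, assuming the `Helicone-RateLimit-Policy` header name and a `quota;w=window_seconds;s=segment` policy syntax (both illustrative, not confirmed by this page):

```python
# Hypothetical sketch: cap usage with a rate-limit policy header.
# Header name and policy format are assumptions for illustration.
rate_limit_headers = {
    "Helicone-Auth": "Bearer <HELICONE_API_KEY>",  # placeholder key
    # At most 1000 requests per user per hour (assumed syntax:
    # "<quota>;w=<window in seconds>;s=<segment>").
    "Helicone-RateLimit-Policy": "1000;w=3600;s=user",
}
```

Segmenting by user (rather than globally) lets one noisy user hit their cap without throttling everyone else.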

    Gateway Fallbacks

    Set up automatic fallback providers when your primary LLM provider fails
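A fallback chain can be sketched as an ordered list of provider targets, tried in sequence when the one before it fails. The `Helicone-Fallbacks` header name and JSON-array value format below are assumptions for illustration:

```python
import json

# Hypothetical sketch: declare fallback targets as an ordered JSON list
# in a single header. Header name and value shape are assumptions.
fallback_headers = {
    "Helicone-Fallbacks": json.dumps([
        "https://api.openai.com/v1/chat/completions",  # primary
        "https://api.anthropic.com/v1/messages",       # tried on failure
    ]),
}
```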

    LLM Security

    Implement security measures and content filtering for your LLM requests

    Content Moderation

    Automatically detect and filter inappropriate content in requests and responses
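Both screening features (security checks on requests, moderation on content) are naturally modeled as per-request opt-in flags. A minimal sketch, with both header names assumed for illustration:

```python
# Hypothetical sketch: enable security screening and content moderation
# with boolean headers. Both header names are assumptions.
safety_headers = {
    "Helicone-LLM-Security-Enabled": "true",  # e.g. prompt-injection checks
    "Helicone-Moderations-Enabled": "true",   # e.g. flag/filter unsafe content
}
```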

    Automatic Retries

    Configure retry logic for failed requests with exponential backoff
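Under exponential backoff, the wait before retry *n* grows geometrically: `delay_n = base * factor**n`, usually capped at a maximum. A sketch of the configuration (the `Helicone-Retry-*` header names are assumptions) together with the backoff schedule it implies:

```python
# Hypothetical sketch: tune retries with count headers (names assumed),
# plus the exponential-backoff math the feature description refers to.
retry_headers = {
    "Helicone-Retry-Enabled": "true",  # assumed header name
    "Helicone-Retry-Num": "3",         # max retry attempts (assumed)
}

def backoff_delays(base=0.5, factor=2.0, retries=3, cap=8.0):
    """Seconds to wait before each retry: base * factor**n, capped."""
    return [min(cap, base * factor**n) for n in range(retries)]

# With the defaults: wait 0.5s, 1.0s, then 2.0s before retries 1..3.
delays = backoff_delays()
```

Doubling the delay each attempt gives a failing provider time to recover while keeping the first retry fast.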
