The AI Gateway securely manages API keys and sensitive credentials for all configured LLM providers. Keys are read from environment variables, discovered automatically based on your configuration, and handled so they are never exposed to clients.

Benefits:

  • Centralize credential access so developers only need the router URL, not individual provider API keys
  • Reduce credential sprawl by keeping all provider secrets in one secure location instead of distributing them
  • Simplify configuration with automatic API key discovery based on configured providers
  • Enable multi-provider setups by managing credentials for multiple LLM providers simultaneously

Quick Start

1. Set your provider API keys

Set environment variables for the providers you want to use:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."

2. Configure your providers

Create ai-gateway-config.yaml with your desired providers:

routers:
  my-router:
    load-balance:
      chat:
        strategy: latency
        providers:
          - openai      # Uses OPENAI_API_KEY
          - anthropic   # Uses ANTHROPIC_API_KEY
          - gemini      # Uses GEMINI_API_KEY

3. Start the gateway

npx @helicone/ai-gateway@latest --config ai-gateway-config.yaml

4. Test secret management

curl -X POST http://localhost:8080/router/my-router/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

✅ The gateway automatically uses the correct API key for whichever provider it routes to!

Storage Options

Cloud secret manager integrations (AWS Secrets Manager, Google Secret Manager, Azure Key Vault, HashiCorp Vault) are coming soon for enterprise deployments.

Use Cases

Use case: Production environment using multiple cloud providers for reliability and cost optimization.

# Set API keys for configured providers
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."

# ai-gateway-config.yaml
routers:
  production:
    load-balance:
      chat:
        strategy: latency
        providers: [openai, anthropic, gemini]

How It Works

Request Flow

1. Configuration Read

The gateway reads your configuration and identifies which providers are configured across all routers.

2. Environment Variable Discovery

The gateway automatically looks for {PROVIDER_NAME}_API_KEY environment variables for each configured provider.

3. Request Arrives

A request comes in and the load balancer selects a provider based on your strategy.

4. API Key Validation

The gateway checks that the required API key is available for the selected provider.

5. Secure Forwarding

The request is forwarded to the provider with the appropriate API key, keeping credentials hidden from the client.
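The discovery convention in steps 1 and 2 can be sketched as a small shell function. This is an illustrative sketch of the {PROVIDER_NAME}_API_KEY naming rule only; the gateway performs this lookup internally, and nothing here is part of its actual code:

```shell
# Map a provider name to the environment variable the gateway looks for.
# Illustrative sketch of the {PROVIDER_NAME}_API_KEY convention.
provider_key_var() {
  printf '%s_API_KEY\n' "$(printf '%s' "$1" | tr 'a-z' 'A-Z')"
}

provider_key_var openai     # OPENAI_API_KEY
provider_key_var anthropic  # ANTHROPIC_API_KEY
```

The same rule explains the comments in the quick-start config: a provider named `gemini` resolves to `GEMINI_API_KEY`.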

Supported Providers

The AI Gateway supports API key management for the following providers:

| Provider    | Environment Variable | Required | Notes                           |
| ----------- | -------------------- | -------- | ------------------------------- |
| OpenAI      | OPENAI_API_KEY       | Yes      | Standard OpenAI API key         |
| Anthropic   | ANTHROPIC_API_KEY    | Yes      | Claude API key                  |
| Gemini      | GEMINI_API_KEY       | Yes      | Google AI Studio API key        |
| AWS Bedrock | BEDROCK_API_KEY      | Yes      | AWS access key                  |
| VertexAI    | VERTEXAI_API_KEY     | Yes      | GCP service account key         |
| Ollama      | N/A                  | No       | Local deployment, no key needed |

You only need to set environment variables for providers you actually use. If you make a request to a provider without a configured API key, the request will fail with a clear error message.
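A quick preflight check before starting the gateway can catch a missing key early. The helper below is hypothetical (it follows the {PROVIDER_NAME}_API_KEY convention described on this page, and is not part of the gateway); edit the provider list to match your config:

```shell
# Hypothetical preflight helper: report any providers whose
# {PROVIDER_NAME}_API_KEY variable is not set in the current environment.
check_provider_keys() {
  missing=0
  for provider in "$@"; do
    var="$(printf '%s' "$provider" | tr 'a-z' 'A-Z')_API_KEY"
    if [ -z "$(printenv "$var")" ]; then
      echo "missing $var for provider '$provider'"
      missing=1
    fi
  done
  return "$missing"
}

# Example: the providers from the quick-start config.
check_provider_keys openai anthropic gemini || echo "set the keys above before starting the gateway"
```

The function returns non-zero when any key is absent, so it can gate a deploy script.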

Error Handling

If a request is routed to a provider whose API key is missing or invalid, the AI Gateway fails with a clear error message rather than forwarding an unauthenticated request.

Security Best Practices

Credential Isolation

1. Router-Only Access

Keep provider keys in the router infrastructure only; developers and applications never handle actual provider API keys.

2. Environment Variable Security

Only the router instances need access to {PROVIDER_NAME}_API_KEY environment variables

3. Client Authentication

Applications authenticate with the router URL instead of individual providers

4. Optional Gateway Authentication

Enable Helicone authentication to require API keys for router access.
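The isolation described above is easy to audit. A hypothetical check for an application host, assuming the provider set listed on this page, is:

```shell
# Hypothetical audit: list any provider keys present in this environment.
# Application hosts should come back empty; only the router needs these.
find_provider_keys() {
  env | grep -E '^(OPENAI|ANTHROPIC|GEMINI|BEDROCK|VERTEXAI)_API_KEY=' || true
}

if [ -n "$(find_provider_keys)" ]; then
  echo "provider keys found on this host; move them to the router"
else
  echo "clean: this host only needs the router URL"
fi
```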

Observability & Monitoring

Track usage and security through integrated monitoring:

  • Monitor API key usage - Track costs and request traces per provider
  • Audit logs - See which requests used which provider keys
  • Cost alerts - Set up usage monitoring and alerts per provider
  • Request tracing - Full observability through Helicone integration

For complete configuration options and syntax, see the Configuration Reference.

Coming Soon

The following secret management features are planned for future releases:

| Feature               | Description                                                              | Version |
| --------------------- | ------------------------------------------------------------------------ | ------- |
| AWS Secrets Manager   | Native integration with automatic rotation and cross-region replication | v2      |
| Google Secret Manager | GCP-native secret management with IAM integration                        | v2      |
| Azure Key Vault       | Microsoft Azure secret management with enterprise governance             | v2      |
| HashiCorp Vault       | Enterprise-grade secret management with dynamic secrets                  | v2      |