Deploy with Docker
Deploy Helicone AI Gateway using Docker in 2 minutes
Deploy the AI Gateway using Docker for easy containerized deployment to any cloud provider or local environment.
Quick Start
Configure environment
Create a `.env` file with your provider API keys. You can use our template as a starting point, then edit the `.env` file with your actual API keys:
Note: `AI_GATEWAY__SERVER__ADDRESS=0.0.0.0` is required for Docker to work properly.
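As a starting point, a minimal `.env` might look like the following. The provider key variable names shown here are illustrative; include only the providers you actually use:

```bash
# Required so the Gateway binds to all interfaces inside the container
AI_GATEWAY__SERVER__ADDRESS=0.0.0.0

# Provider API keys (example names -- add only the providers you use)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```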
Run the container
Start the AI Gateway container:
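A typical invocation looks like this, assuming the image is published as `helicone/ai-gateway` (adjust the image name and tag to match the official registry listing):

```bash
docker run -d \
  --name ai-gateway \
  --env-file .env \
  -p 8080:8080 \
  helicone/ai-gateway:latest
```

The `--env-file .env` flag passes your API keys into the container, and `-p 8080:8080` publishes the Gateway's port on the host.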
The Gateway will be running on `http://localhost:8080` with these routes:
- `/ai` - Unified API that works out of the box
- `/router/{router-name}` - Custom routing with load balancing
- `/{provider-name}` - Direct provider access
Test your deployment
Make a test request to verify everything works:
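For example, a request against the unified `/ai` route (the exact request path and model naming convention may differ; check the API reference for your Gateway version):

```bash
curl http://localhost:8080/ai/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from the AI Gateway!"}]
  }'
```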
You should see a response from the AI model! 🎉
Using a Configuration File
For custom routing and advanced features, create a `config.yaml` file:
Create config file
Create a `config.yaml` file with your routing configuration:
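As a sketch, a router definition might look something like the following. The keys and values here (`routers`, the `my-router` name, the load-balancing strategy) are illustrative assumptions; consult the Gateway's configuration reference for the exact schema:

```yaml
# Hypothetical routing configuration -- verify key names against the config reference
routers:
  my-router:
    load-balance:
      chat:
        strategy: latency
        providers:
          - openai
          - anthropic
```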
Mount config and run
Run the container with your configuration file:
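For example, mounting the file into the container with `-v` (the in-container path `/app/config.yaml` and the image name are assumptions; adjust them to match the image's documented config location):

```bash
docker run -d \
  --name ai-gateway \
  --env-file .env \
  -v "$(pwd)/config.yaml:/app/config.yaml" \
  -p 8080:8080 \
  helicone/ai-gateway:latest
```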
Test your router
Test your custom router:
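For instance, if your config defines a router named `my-router` (an example name from the sketch above; substitute your own), you can send a request through it:

```bash
curl http://localhost:8080/router/my-router/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Testing my custom router"}]
  }'
```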
Next Steps
Secure Your Gateway
Learn how to secure your gateway with authentication and authorization
Deploy to the Cloud
The containerized AI Gateway can be deployed to various cloud platforms: