This integration uses the AI Gateway, which provides a unified API for multiple LLM providers. The AI Gateway is currently in beta.
CLI Integration
1. Configure Codex config file

Update your $CODEX_HOME/config.toml file to include the Helicone provider configuration ($CODEX_HOME is typically ~/.codex on Mac or Linux):
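A minimal sketch of the provider entry is shown below. The base_url and model values are assumptions for illustration; confirm the current AI Gateway endpoint and supported models against Helicone's docs.

```toml
# Example model selection -- any model the gateway supports will work
model = "gpt-4o"
model_provider = "helicone"

# Route all Codex requests through the Helicone AI Gateway
[model_providers.helicone]
name = "Helicone AI Gateway"
# Assumed endpoint; verify against Helicone's AI Gateway documentation
base_url = "https://ai-gateway.helicone.ai/v1"
# Codex reads the API key from this environment variable
env_key = "HELICONE_API_KEY"
# Use the Chat Completions wire format
wire_api = "chat"
```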
2. Set your Helicone API key

Set the HELICONE_API_KEY environment variable:
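For example, in a POSIX shell:

```bash
# Replace the placeholder with the API key from your Helicone dashboard
export HELICONE_API_KEY=<your-helicone-api-key>
```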
3. Run Codex with Helicone
Use Codex as normal. Your requests will automatically be logged to Helicone:
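For example (the prompt is illustrative):

```bash
codex "explain the main entry point of this repository"
```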
SDK Integration
1. Install the Codex SDK
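Assuming the TypeScript SDK published as @openai/codex-sdk:

```bash
npm install @openai/codex-sdk
```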
2. Configure the SDK with Helicone
Initialize the Codex SDK with the AI Gateway base URL:
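A minimal sketch, assuming the Codex SDK constructor accepts baseUrl and apiKey options and that the gateway endpoint is https://ai-gateway.helicone.ai; verify both against the current Codex SDK and Helicone docs.

```typescript
import { Codex } from "@openai/codex-sdk";

// Point the SDK at the Helicone AI Gateway instead of the default endpoint.
// The baseUrl value is an assumption -- check Helicone's docs for the current URL.
const codex = new Codex({
  baseUrl: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});

// Run a task; the request is routed through the gateway and logged in Helicone.
const thread = codex.startThread();
const result = await thread.run("Summarize the TODOs in this project");
console.log(result.finalResponse);
```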
The Codex SDK doesn’t currently support specifying the wire API, so it defaults to the Responses API. The AI Gateway supports this format for a limited set of models and providers; see the Responses API documentation for details.
Additional Features
Once integrated with the Helicone AI Gateway, you can take advantage of:

- Unified Observability: Monitor all your Codex usage alongside other LLM providers
- Cost Tracking: Track costs across different models and providers
- Custom Properties: Add metadata to your requests for better organization (see the sketch after this list)
- Rate Limiting: Control usage and prevent abuse
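For example, Helicone custom properties travel as Helicone-Property-* request headers. One way to attach them from the Codex CLI is via the provider's http_headers option; the option name and header values here are illustrative, so confirm them against the Codex config reference:

```toml
[model_providers.helicone]
name = "Helicone AI Gateway"
base_url = "https://ai-gateway.helicone.ai/v1"
env_key = "HELICONE_API_KEY"
wire_api = "chat"
# Attach Helicone custom properties to every request (values are illustrative)
http_headers = { "Helicone-Property-Environment" = "development", "Helicone-Property-App" = "codex-cli" }
```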
Related resources:

- AI Gateway Overview: Learn more about Helicone’s AI Gateway and its features
- Responses API Support: Use the OpenAI Responses API format through Helicone AI Gateway
- Provider Routing: Configure automatic routing and fallbacks for reliability
- Custom Properties: Add metadata to your requests for better tracking and organization