This integration uses the AI Gateway, which provides a unified API for multiple LLM providers. The AI Gateway is currently in beta.

CLI Integration

1. Create an account and generate an API key

Log into Helicone or create an account. Once you have an account, you can generate an API key from your Helicone dashboard.
2. Configure your Codex config file

Update your $CODEX_HOME/config.toml file to include the Helicone provider configuration. $CODEX_HOME typically resolves to ~/.codex on macOS or Linux, so the file usually lives at ~/.codex/config.toml.
config.toml
model_provider = "helicone"

[model_providers.helicone]
name = "Helicone"
base_url = "https://ai-gateway.helicone.ai/v1"
env_key = "HELICONE_API_KEY"
wire_api = "chat"
3. Set your Helicone API key

Set the HELICONE_API_KEY environment variable:
export HELICONE_API_KEY=<your-helicone-api-key>
4. Run Codex with Helicone

Use Codex as normal. Your requests will automatically be logged to Helicone:
# If you set model_provider in config.toml
codex "What files are in the current directory?"

# Or specify the provider explicitly
codex -c model_provider="helicone" "What files are in the current directory?"
5. Verify your requests in Helicone

With the above setup, any requests made through the Codex CLI are automatically logged to Helicone. Review them in your Helicone dashboard.

SDK Integration

1. Create an account and generate an API key

Log into Helicone or create an account. Once you have an account, you can generate an API key from your Helicone dashboard.
2. Install the Codex SDK

npm install @openai/codex-sdk
3. Configure the SDK with Helicone

Initialize the Codex SDK with the AI Gateway base URL:
import { Codex } from "@openai/codex-sdk";

const codex = new Codex({
  baseUrl: "https://ai-gateway.helicone.ai/v1",
  apiKey: process.env.HELICONE_API_KEY,
});

const thread = codex.startThread({
  model: "gpt-5" // 100+ models supported
});
const turn = await thread.run("What files are in the current directory?");

console.log(turn.finalResponse);
console.log(turn.items);
The Codex SDK doesn't currently support specifying the wire API, so it uses the Responses API by default. The AI Gateway supports the Responses API for a limited set of models and providers; see the Responses API documentation for details.
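If a model you want isn't available over the Responses API, one workaround is to call the gateway's OpenAI-compatible chat completions endpoint directly, the same wire API the CLI configuration above uses. The following is a minimal sketch, assuming the gateway accepts a standard /v1/chat/completions request and a Node 18+ runtime with global fetch:

// Minimal sketch: a direct chat completions call through the AI Gateway.
// Assumes HELICONE_API_KEY is set in the environment.
const response = await fetch(
  "https://ai-gateway.helicone.ai/v1/chat/completions",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HELICONE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-5",
      messages: [
        { role: "user", content: "What files are in the current directory?" },
      ],
    }),
  }
);
const completion = await response.json();
console.log(completion.choices[0].message.content);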
4. Verify your requests in Helicone

With the above setup, any requests made through the Codex SDK are automatically logged to Helicone. Review them in your Helicone dashboard.

Additional Features

Once integrated with Helicone AI Gateway, you can take advantage of:
  • Unified Observability: Monitor all your Codex usage alongside other LLM providers
  • Cost Tracking: Track costs across different models and providers
  • Custom Properties: Add metadata to your requests for better organization (see the sketch after this list)
  • Rate Limiting: Control usage and prevent abuse
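
For example, Helicone custom properties are attached as Helicone-Property-* request headers. Here is a minimal sketch of tagging a gateway request this way; the property names (Environment, Session) are illustrative, and it assumes the AI Gateway forwards these headers to Helicone's logging:

// Minimal sketch: attaching Helicone custom properties to a gateway request.
// The property names below are examples; choose names that fit your app.
await fetch("https://ai-gateway.helicone.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.HELICONE_API_KEY}`,
    "Content-Type": "application/json",
    "Helicone-Property-Environment": "staging",
    "Helicone-Property-Session": "codex-demo",
  },
  body: JSON.stringify({
    model: "gpt-5",
    messages: [{ role: "user", content: "Hello from Codex" }],
  }),
});

Properties tagged this way show up alongside each request in the Helicone dashboard, where you can filter and group by them.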