Helicone’s AI Gateway integrates directly with our prompt management system, with no custom packages or code changes required. This guide covers only how to connect the AI Gateway to prompt management; for creating and managing the prompts themselves, see Prompt Management.

Why Use Prompt Integration?

Instead of hardcoding prompts in your application, reference them by ID:
// ❌ Prompt hardcoded in your app
const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    {
      role: "system", 
      content: "You are a helpful customer support agent for TechCorp. Be friendly and solution-oriented."
    },
    {
      role: "user",
      content: `Customer ${customerName} is asking about ${issueType}`
    }
  ]
});
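With a managed prompt, the same call references the prompt by ID instead. The sketch below mirrors the gateway format shown later in this guide; the prompt ID and input names are illustrative, and the call is wrapped in a function so any gateway-configured client can be passed in:

```javascript
// ✅ Prompt managed in Helicone, referenced by ID.
// "customer_support_v2" and the input names are illustrative; use the
// values from your own Helicone dashboard.
async function askSupport(client, customerName, issueType) {
  return client.chat.completions.create({
    model: "gpt-4o-mini",
    prompt_id: "customer_support_v2",
    inputs: {
      customer_name: customerName,
      issue_type: issueType,
    },
  });
}
```

The prompt text itself now lives in Helicone, so it can be edited and versioned without redeploying the application.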

Gateway vs SDK Integration

Without the AI Gateway, using managed prompts requires multiple steps:
// 1. Install package
npm install @helicone/helpers

// 2. Import and initialize the prompt manager
import { HeliconePromptManager } from "@helicone/helpers";

const promptManager = new HeliconePromptManager({
  apiKey: "your-helicone-api-key"
});

// 3. Fetch and compile prompt (separate API call)
const { body, errors } = await promptManager.getPromptBody({
  prompt_id: "abc123",
  inputs: { customer_name: "John", ... }
});

// 4. Handle errors manually
if (errors.length > 0) {
  console.warn("Validation errors:", errors);
}

// 5. Finally make the LLM call
const response = await openai.chat.completions.create(body);

Why the gateway is better:
  • No extra packages - Works with your existing OpenAI SDK
  • Single API call - Gateway fetches and compiles automatically
  • Lower latency - Everything happens server-side in one request
  • Automatic error handling - Invalid inputs return clear error messages
  • Cleaner code - No prompt management logic in your application
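Because validation happens server-side, a bad request (for example, a missing template variable) surfaces as an ordinary SDK error rather than a separate validation step. A sketch of that handling, with an illustrative prompt ID and error message:

```javascript
// Sketch: gateway errors arrive as normal SDK errors, so a plain
// try/catch replaces the manual validation loop from the SDK flow.
async function askSupportSafely(client, inputs) {
  try {
    return await client.chat.completions.create({
      model: "gpt-4o-mini",
      prompt_id: "customer_support_v2", // illustrative prompt ID
      inputs,
    });
  } catch (err) {
    // e.g. a missing or invalid input comes back as a clear error message
    console.warn("Gateway rejected the request:", err.message);
    return null;
  }
}
```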

Integration Steps

1. Create prompts in Helicone: build and test prompts with variables in the dashboard.

2. Use prompt_id in your code: replace messages with prompt_id and inputs in your gateway calls.

API Parameters

Use these parameters in your chat completions request to integrate with saved prompts:
  • prompt_id (string, required) - The ID of your saved prompt from the Helicone dashboard
  • environment (string, default: "production") - Which environment version to use: development, staging, or production
  • inputs (object, required) - Variables to fill in your prompt template (e.g., {"customer_name": "John", "issue_type": "billing"})
  • model (string, required) - Any supported model; works with the unified gateway format

Example Usage

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  prompt_id: "customer_support_v2",
  environment: "production",
  inputs: {
    customer_name: "Sarah Johnson",
    issue_type: "billing",
    customer_message: "I was charged twice this month"
  }
});
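The example assumes a client already routed through the AI Gateway. A minimal setup sketch using the standard OpenAI SDK is shown below; the baseURL is an assumption for illustration, so check the Helicone docs or your dashboard for the actual gateway endpoint:

```javascript
import OpenAI from "openai";

// Sketch: pointing the standard OpenAI SDK at the AI Gateway.
// The baseURL below is illustrative; use the gateway endpoint from
// the Helicone docs, and authenticate with your Helicone API key.
const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});
```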

Next Steps