## Installation
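Assuming the SDK is published to npm as `@helicone/generate` (the package name is not stated on this page), installation would look like:

```shell
npm install @helicone/generate
```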
## Usage
### Simple usage with just a prompt ID
### With variables
### With Helicone properties
### In a chat
## Supported Providers and Required Environment Variables
Always required: `HELICONE_API_KEY`
| Provider | Required Environment Variables |
|---|---|
| OpenAI | OPENAI_API_KEY |
| Azure OpenAI | AZURE_API_KEY, AZURE_ENDPOINT, AZURE_DEPLOYMENT |
| Anthropic | ANTHROPIC_API_KEY |
| AWS Bedrock | BEDROCK_API_KEY, BEDROCK_REGION |
| Google Gemini | GOOGLE_GEMINI_API_KEY |
| Google Vertex AI | GOOGLE_VERTEXAI_API_KEY, GOOGLE_VERTEXAI_REGION, GOOGLE_VERTEXAI_PROJECT, GOOGLE_VERTEXAI_LOCATION |
| OpenRouter | OPENROUTER_API_KEY |
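For example, with OpenAI as the provider, the environment would be configured as follows (both values are placeholders):

```shell
export HELICONE_API_KEY=<your-helicone-api-key>
export OPENAI_API_KEY=<your-openai-api-key>
```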
## API Reference
### generate(input)
Generates a response using a Helicone prompt.
#### Parameters
`input` (string | object): Either a prompt ID string or a parameters object:
- `promptId` (string): The ID of the prompt to use, created in the Prompt Editor
- `version` (number | "production", optional): The version of the prompt to use. Defaults to "production"
- `inputs` (object, optional): Variable inputs to use in the prompt, if any
- `chat` (string[], optional): Chat history for chat-based prompts
- `userId` (string, optional): User ID for tracking in Helicone
- `sessionId` (string, optional): Session ID for tracking in Helicone Sessions
- `cache` (boolean, optional): Whether to use Helicone's LLM Caching
#### Returns
`Promise<object>`: The raw response from the LLM provider