Introduction

The Helicone LLM integration for LlamaIndex lets you send OpenAI-compatible requests through the Helicone AI Gateway, with no provider keys needed. You gain centralized routing, observability, and control across many models and providers.
This integration uses a dedicated LlamaIndex package: llama-index-llms-helicone.

Install

pip install llama-index-llms-helicone

Usage

from llama_index.llms.helicone import Helicone
from llama_index.core.llms import ChatMessage

llm = Helicone(
    api_key="<helicone-api-key>",
    model="gpt-4o-mini",  # works across providers
    is_chat_model=True,
)

message = ChatMessage(role="user", content="Hello world!")

response = llm.chat(messages=[message])
print(str(response))

Parameters

  • model: OpenAI‑compatible model name routed via Helicone. See the model registry.
  • api_base (optional): Base URL for Helicone AI Gateway (defaults to the package’s DEFAULT_API_BASE). Can also be set via HELICONE_API_BASE.
  • api_key: Your Helicone API key. Can be set via the constructor or the HELICONE_API_KEY environment variable.
  • default_headers (optional): Additional headers to send with every request; the Authorization: Bearer <api_key> header is set automatically.

Environment Variables

export HELICONE_API_KEY=sk-helicone-...
# Optional override
export HELICONE_API_BASE=https://ai-gateway.helicone.ai/v1
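The precedence between constructor arguments, environment variables, and the package default can be sketched as follows. resolve_config is a hypothetical helper for illustration; the actual resolution happens inside the package.

```python
import os

# Default gateway URL, as shown in the docs above.
DEFAULT_API_BASE = "https://ai-gateway.helicone.ai/v1"

def resolve_config(api_key=None, api_base=None):
    """Illustrative precedence: constructor arg, then env var, then default."""
    key = api_key or os.environ.get("HELICONE_API_KEY")
    base = api_base or os.environ.get("HELICONE_API_BASE", DEFAULT_API_BASE)
    return key, base

os.environ["HELICONE_API_KEY"] = "sk-helicone-demo"
print(resolve_config())
```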

Advanced Configuration

from llama_index.llms.helicone import Helicone

llm = Helicone(
    model="gpt-4.1-mini",
    api_key="<helicone-api-key>",
    api_base="https://ai-gateway.helicone.ai/v1",
    default_headers={
        "Helicone-Session-Id": "demo-session",
        "Helicone-User-Id": "user-123",
        "Helicone-Property-Environment": "production",
    },
    temperature=0.2,
    max_tokens=256,
)
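If you tag many requests with custom properties, a small helper can build the Helicone-Property-<Name> headers shown above. property_headers is a hypothetical convenience, not part of the package.

```python
def property_headers(props: dict) -> dict:
    """Build Helicone custom-property headers from a plain dict
    (hypothetical helper, following the Helicone-Property-<Name> convention)."""
    return {f"Helicone-Property-{name}": str(value) for name, value in props.items()}

print(property_headers({"Environment": "production", "Team": "search"}))
```

You can pass the result as default_headers when constructing Helicone, or merge it with session and user headers.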

Notes

  • Authentication uses your Helicone API key; provider keys are not required when using the AI Gateway.
  • All requests appear in the Helicone dashboard with full request/response visibility and cost tracking.
  • Learn more about routing and model coverage in the Helicone model registry.