LiteLLM is a model I/O library to standardize API calls to Azure, Anthropic, OpenAI, etc. Here’s how you can log your LLM API calls to Helicone from LiteLLM using callbacks.
Note: Custom Properties are available in metadata starting with LiteLLM version 1.41.23.
System Instructions Limitation: When using LiteLLM callbacks, system instructions for Gemini and Claude may not appear as "role": "system" messages in Helicone logs. This is because LiteLLM processes the request before sending it to Helicone. For full system instruction support, consider using a proxy-based integration instead.

1-line integration

Add HELICONE_API_KEY to your environment variables.
export HELICONE_API_KEY=sk-<your-api-key>
# You can also set it in your code (See below)
Tell LiteLLM you want to log your data to Helicone
litellm.success_callback = ["helicone"]

Complete code

import os

import litellm
from litellm import completion

## set env variables
os.environ["HELICONE_API_KEY"] = "your-helicone-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

# set callbacks
litellm.success_callback = ["helicone"]

# OpenAI call
response = completion(
  model="gpt-4o-mini",
  messages=[{"role": "user", "content": "Hi πŸ‘‹ - i'm openai"}],
  metadata={
    "Helicone-Property-Hello": "World"
  }
)

# Cohere call
response = completion(
  model="command-r",
  messages=[{"role": "user", "content": "Hi πŸ‘‹ - i'm cohere"}],
  metadata={
    "Helicone-Property-Hello": "World"
  }
)

print(response)
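Custom Properties travel as ordinary keys in the metadata dict, distinguished only by the Helicone-Property- prefix. As a rough sketch (the helper function is hypothetical, not part of LiteLLM or Helicone), the prefixed properties can be separated from the rest of your metadata like this:

```python
# Hypothetical helper: split Helicone Custom Properties out of a LiteLLM
# metadata dict. The "Helicone-Property-" prefix is the documented convention;
# the function itself is illustrative only.
def split_helicone_properties(metadata: dict) -> tuple[dict, dict]:
    properties = {
        key: value
        for key, value in metadata.items()
        if key.startswith("Helicone-Property-")
    }
    other = {key: value for key, value in metadata.items() if key not in properties}
    return properties, other


properties, other = split_helicone_properties(
    {"Helicone-Property-Hello": "World", "trace_id": "abc-123"}
)
print(properties)  # {'Helicone-Property-Hello': 'World'}
print(other)       # {'trace_id': 'abc-123'}
```

Any key without the prefix is left as ordinary LiteLLM metadata.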
Feel free to check it out and tell us what you think 👋. For proxy-based integration with LiteLLM, see our LiteLLM Proxy Integration guide.