LiteLLM Integration
Connect Helicone with LiteLLM, a unified interface for multiple LLM providers. Standardize logging and monitoring across various AI models with simple callback or proxy setup.
LiteLLM is a model I/O library that standardizes API calls to Azure, Anthropic, OpenAI, and other providers. Here's how you can log your LLM API calls to Helicone from LiteLLM.
Note: Custom Properties are available in metadata starting with LiteLLM version 1.41.23.
Approach 1: Use Callbacks
1 line integration
1. Add HELICONE_API_KEY to your environment variables.
2. Tell LiteLLM you want to log your data to Helicone.
Approach 2: [OpenAI + Azure only] Use Helicone as a proxy
Helicone provides advanced functionality like caching, which it supports for Azure and OpenAI.
If you want to use Helicone to proxy your OpenAI/Azure requests:
2 line integration
Feel free to check it out and tell us what you think!