Connect Helicone with LiteLLM through proxies for OpenAI, Azure, and Gemini to add logging, monitoring and advanced functionality.
Using Helicone as a proxy allows your LiteLLM requests to flow through Helicone’s infrastructure before reaching the LLM provider, enabling features such as request and response logging, usage monitoring, and other advanced functionality.
Unlike callback-based integration, proxy integration works at the network level and can provide more functionality for supported providers.
Helicone offers native proxy support for OpenAI and Azure through LiteLLM. This is the simplest integration method.
Install dependencies
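You will need the litellm package available in your environment (for example, pip install litellm).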
Configure LiteLLM to use the Helicone proxy
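A minimal sketch of this configuration, assuming your Helicone API key is available as the HELICONE_API_KEY environment variable and using LiteLLM’s module-level api_base and headers settings (check your LiteLLM version’s docs for the exact configuration surface):

```python
import os
import litellm

# Route OpenAI/Azure requests through Helicone's OpenAI-compatible proxy
# and authenticate to Helicone with your API key (assumed env var).
litellm.api_base = "https://oai.helicone.ai/v1"
litellm.headers = {
    "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
}
```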
Make API calls as usual
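For example, a standard completion call now flows through Helicone before reaching the provider (the model name below is just a placeholder):

```python
response = litellm.completion(
    model="gpt-4o-mini",  # any OpenAI or Azure model you normally use
    messages=[{"role": "user", "content": "Hello from LiteLLM via Helicone!"}],
)
print(response.choices[0].message.content)
```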
Gemini integration requires a special approach because of how the Vertex AI APIs are structured. We need to use a monkey patch with LiteLLM’s Router to correctly route requests through Helicone.
Gemini’s API structure differs from OpenAI’s, and LiteLLM’s default proxy handling doesn’t properly route Gemini requests through Helicone. The patch modifies LiteLLM’s internal URL handling to correctly work with Helicone’s Gemini proxy endpoints.
Install dependencies
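Here too the litellm package is the main requirement (for example, pip install litellm); depending on how you authenticate with Gemini or Vertex AI, you may also need Google’s client or auth libraries.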
Set up environment variables
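Typically this means making your Helicone API key and your Gemini or Vertex AI credentials available to the process, for example as HELICONE_API_KEY and GEMINI_API_KEY; the exact variable names depend on how you authenticate with Google and are only an assumption here.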
Configure the LiteLLM Router
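An illustrative Router configuration, under the assumption that Gemini traffic goes to Helicone’s gateway with a Helicone-Target-Url header and that GEMINI_API_KEY and HELICONE_API_KEY are set; the gateway URL, header names, and model identifier should be verified against the Helicone and LiteLLM docs for your versions:

```python
import os
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gemini-flash",  # alias used when calling the router
            "litellm_params": {
                "model": "gemini/gemini-1.5-flash",  # LiteLLM's Gemini route
                "api_key": os.getenv("GEMINI_API_KEY"),
                # Assumed Helicone gateway endpoint and headers for Gemini:
                "api_base": "https://gateway.helicone.ai",
                "extra_headers": {
                    "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
                    "Helicone-Target-Url": "https://generativelanguage.googleapis.com",
                },
            },
        }
    ]
)
```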
Apply the monkey patch
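The exact patch depends on your LiteLLM version, so it is not reproduced here; the general pattern is to keep a reference to LiteLLM’s original URL-building logic and replace it with a wrapper that rewrites the Vertex AI endpoint to Helicone’s Gemini proxy endpoint before the request is sent.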
Make API calls
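With the Router (and patch) in place, calls use the alias defined in model_list; the names below match the sketch above:

```python
response = router.completion(
    model="gemini-flash",  # alias from the Router's model_list
    messages=[{"role": "user", "content": "Summarize why proxy-based logging is useful."}],
)
print(response.choices[0].message.content)
```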
For LLM providers beyond OpenAI, Azure, and Gemini, the integration approach varies by provider:
- Some providers can be reached through Helicone’s OpenAI-compatible proxy endpoint (oai.helicone.ai/v1).
- Some providers require a Helicone-Target-Url header that tells Helicone where to forward the request, while others don’t (see the sketch below).
- Consult the Helicone documentation for provider-specific proxy endpoints and required headers. If your provider isn’t explicitly supported, reach out to the Helicone team for guidance.
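As a rough illustration only, a provider with an OpenAI-compatible API might be reached through the oai.helicone.ai/v1 endpoint with a Helicone-Target-Url header pointing at the provider’s real base URL; the model name, target URL, and PROVIDER_API_KEY variable below are hypothetical placeholders, so confirm the details in the Helicone docs for your provider:

```python
import os
import litellm

response = litellm.completion(
    model="openai/my-provider-model",        # hypothetical OpenAI-compatible model
    api_base="https://oai.helicone.ai/v1",   # Helicone's OpenAI-compatible proxy
    api_key=os.getenv("PROVIDER_API_KEY"),   # hypothetical provider credential
    extra_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
        "Helicone-Target-Url": "https://api.my-provider.example",  # hypothetical target
    },
    messages=[{"role": "user", "content": "Hello"}],
)
```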
For callback-based integration with LiteLLM, see our LiteLLM Callbacks Integration guide.