Compare Helicone’s Proxy and Async integration methods. Understand the features, benefits, and use cases for each approach to choose the best fit for your LLM application.
There are two ways to interface with Helicone: Proxy and Async. This guide will help you decide which one is right for you by walking through the pros and cons of each option.
| Feature | Proxy | Async |
| --- | --- | --- |
| Easy setup | ✅ | ❌ |
| Prompts | ✅ | ✅ |
| Prompts Auto Formatting (easier) | ✅ | ❌ |
| Custom Properties | ✅ | ✅ |
| Bucket Cache | ✅ | ❌ |
| User Metrics | ✅ | ✅ |
| Retries | ✅ | ❌ |
| Custom Rate Limiting | ✅ | ❌ |
| Open-source | ✅ | ✅ |
| Not on critical path | ❌ | ✅ |
| Zero Propagation Delay | ❌ | ✅ |
| Negligible Logging Delay | ✅ | ✅ |
| Streaming Support | ✅ | ✅ |
The primary reason users choose the Proxy integration is its simplicity.
It's as easy as changing the base URL to point to Helicone; we forward the request to the LLM provider and return the response to you.
*Proxy: flow of data.*
Since the proxy sits on the edge and acts as the gatekeeper for your requests, you get access to a suite of Gateway tools such as caching, rate limiting, API key management, threat detection, moderation, and more.
Here's a simple example: instead of calling the OpenAI API at `api.openai.com`, you change the URL to Helicone's dedicated domain, `oai.helicone.ai`. You can also use the general Gateway URL `gateway.helicone.ai` if Helicone doesn't yet have a dedicated domain for the provider.
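With the OpenAI Node SDK, for instance, the switch is a single `baseURL` change. This is a minimal sketch; the `Helicone-Auth` and `Helicone-Cache-Enabled` headers follow the patterns in Helicone's docs, but check the integration page for your provider for the exact values:

```typescript
import OpenAI from "openai";

// Point the SDK at Helicone's dedicated OpenAI domain instead of
// api.openai.com. Helicone authenticates the proxy via the Helicone-Auth
// header and forwards the request to OpenAI unchanged.
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    // Gateway features are enabled per request via headers,
    // e.g. response caching:
    "Helicone-Cache-Enabled": "true",
  },
});

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```

Everything else about your application code stays the same; the response comes back through Helicone exactly as the provider returned it.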
Helicone Async allows for a more flexible workflow where the actual logging of the event is off the critical path. This gives some users more confidence that a Helicone outage or a network issue will never affect their application.
*Async: flow of data.*
The downside is that we cannot offer the same suite of Gateway tools that the proxy provides.
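A minimal sketch of the pattern, assuming a fire-and-forget HTTP log (the `api.helicone.ai` endpoint and payload shape below are hypothetical placeholders for illustration; the real async integration ships as provider-specific packages described in the docs): the app calls the provider directly, then sends the log without awaiting it.

```typescript
import OpenAI from "openai";

// Async integration: the app talks to the provider directly, and the log
// is sent to Helicone afterwards, off the critical path.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function chat(prompt: string): Promise<string> {
  const request = {
    model: "gpt-4o-mini",
    messages: [{ role: "user" as const, content: prompt }],
  };
  const response = await openai.chat.completions.create(request);

  // Fire-and-forget: a failure here (Helicone down, network issue) never
  // blocks or breaks the user-facing request. NOTE: the endpoint and
  // payload below are illustrative placeholders, not Helicone's exact
  // logging API.
  void fetch("https://api.helicone.ai/v1/log", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HELICONE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ request, response }),
  }).catch((err) => console.warn("Helicone logging failed:", err));

  return response.choices[0].message.content ?? "";
}
```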
Choose your LLM provider and get started with Helicone.
Need more help?
Additional questions or feedback? Reach out to help@helicone.ai or schedule a call with us.