Helicone is the open-source LLM observability platform for developers to monitor, debug, and improve production-ready applications.
Log your LLM requests, evaluate and experiment with prompts, and get instant insights that help you push changes to production with confidence. Helicone is the CI workflow designed for the entire LLM lifecycle.
Welcome to Helicone! Integrate your preferred provider with Helicone in seconds.
Create an account
Once you have an account, proceed to the next step.
Generate an API key
Go to Settings
Click on your organization in the top left, then select Settings from the dropdown.
Select the API Keys tab
Generate new key
Click on Generate new key.
When creating a new Helicone API key, you can enable read and write permissions.
Write keys can be used through Helicone via our proxy, feedback, or any other Helicone service when calling a POST endpoint or using our gateway. Keys with read permissions will start with sk- and keys with write permissions will start with pk-. For our EU customers, keys are generated with the prefix eu-, which lets our edge workers know which region to route the request to.
For more details on Helicone API keys, check out the Helicone Auth docs.
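As a quick illustration of how these keys are used in practice, here is a minimal sketch of building the auth header Helicone's proxy expects (the header name comes from the Auth docs; the key value below is a placeholder, not a real key):

```python
def helicone_headers(helicone_api_key: str) -> dict:
    """Build the Helicone-Auth header for proxied requests.

    The Helicone API key is passed as a Bearer token in Helicone-Auth,
    alongside your normal provider credentials.
    """
    return {"Helicone-Auth": f"Bearer {helicone_api_key}"}

# An EU key would carry the eu- prefix so edge workers route the request
# to the EU region. This key is an illustrative placeholder.
headers = helicone_headers("pk-eu-example-key")
print(headers["Helicone-Auth"])
```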
Pick your preferred integration method
Select the provider you are using below as the next instruction varies.
Send your first request 🎉
Once we receive your requests, you will see them in the Requests tab. You will also see that your Dashboard has been updated with your new request.
We curated a list to help you make the most of Helicone, but you’re welcome to explore the product on your own!
Label your requests to segment, analyze, and visualize them.
Version your prompt and inputs as they evolve.
Group and visualize multi-step LLM interactions (e.g., AI agents).
Cache and watch how much time and cost you saved.
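The features above are opt-in and are typically enabled per request via headers. A hedged sketch of what that might look like (header names follow Helicone's documented conventions; the values are illustrative):

```python
def feature_headers(user_id: str, session_id: str) -> dict:
    """Sketch of opt-in Helicone feature headers (values are illustrative).

    Custom properties label requests for segmenting and analysis,
    sessions group multi-step agent interactions, and the cache header
    enables response caching to save time and cost.
    """
    return {
        "Helicone-User-Id": user_id,              # per-user usage insights
        "Helicone-Property-Environment": "prod",  # custom property label
        "Helicone-Session-Id": session_id,        # group agent steps
        "Helicone-Cache-Enabled": "true",         # cache repeat requests
    }

print(feature_headers("user-123", "session-abc"))
```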
The following guides are optional, but we think you’ll find them useful.
Discover all the features for monitoring and experimenting with your prompts.
Manage and version prompts from code or UI.
Trace agentic workflows and visualize them.
Instantly react to events, trigger actions, and integrate with external tools.
Label and segment your requests.
Save cost and improve latency.
Evaluate prompt performance and quantify model outputs.
Remove requests and responses from your logs.
Get insights into your users’ usage.
Curate datasets and fine-tune your LLMs.
Provide user feedback on output.
Utilize any provider through a single endpoint.
Smartly retry requests.
Easily rate-limit power users.
Manage and distribute your provider API keys securely.
Integrate OpenAI moderation to safeguard your chat completions.
Secure OpenAI chat completions against prompt injections.
Set up integrations to subscribe to Helicone events.
Easily manage your customers and their usage.
Determine when you should use a proxy or async function in Helicone.
A detailed breakdown of our process to calculate cost per request.
Every header you need to know to access Helicone features.
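As with the features above, reliability options such as retries and rate limits are driven by request headers. A hedged sketch under the documented header conventions (the policy string format here, quantity;w=seconds, is an assumption worth verifying against the header docs):

```python
def reliability_headers(policy: str = "10;w=60") -> dict:
    """Sketch of Helicone reliability headers (illustrative values).

    Retries transparently re-attempt failed provider calls; the
    rate-limit policy string caps request volume, e.g. "10;w=60"
    for 10 requests per 60-second window.
    """
    return {
        "Helicone-Retry-Enabled": "true",
        "Helicone-RateLimit-Policy": policy,
    }

print(reliability_headers("3;w=60"))
```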
Although we designed the docs to be as self-serve as possible, you are welcome to join our Discord or contact help@helicone.ai with any questions or feedback.
Interested in deploying Helicone on-prem? Schedule a call with us.