OpenAI Async Logging
This page shows you how to log requests in Helicone when using OpenAI, without going through the Helicone Proxy. For more information on async logging, see the Proxy vs Async page.
1 line integration
Add `HELICONE_API_KEY` to your environment variables, then replace your standard `openai` import with the Helicone package's import.
More complex example
Installation and Setup
To get started, install the `helicone-openai` package
Set `HELICONE_API_KEY` as an environment variable
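For example, on macOS or Linux the key can be exported in your shell (the value below is a placeholder):

```shell
# Placeholder value; use your real Helicone API key
export HELICONE_API_KEY="sk-helicone-your-key"
```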
Replace your standard `openai` import with the Helicone package's import.
Make a request
Chat, completion, embedding, and other usage is identical to the OpenAI package.
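For instance, a chat call might look like the sketch below; the payload is the standard OpenAI chat format, and the commented-out call assumes the Helicone-wrapped `openai` client is imported and an API key is configured.

```python
# Standard OpenAI-style chat payload; with the Helicone client imported in
# place of `openai`, the call is written exactly as it would be normally.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello"}],
}
# response = openai.ChatCompletion.create(**payload)  # requires an API key
```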
Send feedback
With async logging, you must retrieve the `helicone-id` header from the logging response (not the LLM response).
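As an illustration (the header dict below is a stand-in; only the `helicone-id` key comes from the text above):

```python
# Stand-in for the headers of the Helicone logging response; in practice
# these come from the HTTP response of the log call, not the LLM response.
log_response_headers = {
    "helicone-id": "a1b2c3d4-e5f6",
    "content-type": "application/json",
}
helicone_id = log_response_headers["helicone-id"]  # pass this when sending feedback
```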
HeliconeMeta options
Async logging does not support some proxy-only features, such as caching, custom rate limits, and retries.
The Helicone Async Log Request API logs requests and responses that go through an endpoint. This is highly useful for auditing, debugging, and observing the behavior of your interactions with the system.
Request Structure
A typical request will have the following structure:
Endpoint
Headers
| Name | Value |
|---|---|
| Authorization | Bearer {API_KEY} |

Replace `{API_KEY}` with your actual API key.
Body
The body of the request should follow the `HeliconeAsyncLogRequest` structure:
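As a sketch, a body following that structure might look like the dict below; the top-level field names (`providerRequest`, `providerResponse`, `timing`) are assumptions to be checked against the current API reference.

```python
# Hypothetical sketch of a HeliconeAsyncLogRequest body; field names are
# assumptions, and "..." values are placeholders for your own data.
log_body = {
    "providerRequest": {"url": "...", "json": {}, "meta": {}},       # what you sent
    "providerResponse": {"json": {}, "status": 200, "headers": {}},  # what came back
    "timing": {  # Unix epoch seconds plus a millisecond offset
        "startTime": {"seconds": 1700000000, "milliseconds": 0},
        "endTime": {"seconds": 1700000001, "milliseconds": 500},
    },
}
```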
Example Usage
Here’s an example using curl:
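A hedged reconstruction of such a command follows; the endpoint URL and the JSON field names are assumptions and should be checked against Helicone's current API reference.

```shell
# Hypothetical endpoint and body fields; verify both before use
curl -X POST https://api.hconeai.com/oai/v1/log \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "providerRequest": {
      "url": "https://api.openai.com/v1/chat/completions",
      "json": {"model": "gpt-3.5-turbo", "messages": []},
      "meta": {}
    },
    "providerResponse": {
      "json": {"id": "chatcmpl-123", "choices": []},
      "status": 200,
      "headers": {}
    },
    "timing": {
      "startTime": {"seconds": 1700000000, "milliseconds": 0},
      "endTime": {"seconds": 1700000001, "milliseconds": 500}
    }
  }'
```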
In the curl command above, replace `your_api_key` with your actual API key, and adjust the values in the JSON to match your actual request, response, and timing data.
The response body is a JSON object containing the entire response returned by OpenAI, unless the request was streamed. In that case, it is a JSON object with a `streamed_data` key, an array containing every chunk.
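For example, reassembling the streamed text from such a body might look like the sketch below; only the `streamed_data` key comes from the text above, while the per-chunk fields (`choices`, `delta`, `content`) follow OpenAI's streaming chat format.

```python
# Example logged body for a streamed request: every chunk is kept under
# "streamed_data"; chunk fields follow OpenAI's streaming chat format.
logged_body = {
    "streamed_data": [
        {"choices": [{"delta": {"content": "Hel"}}]},
        {"choices": [{"delta": {"content": "lo"}}]},
        {"choices": [{"delta": {}}]},  # final chunk often has an empty delta
    ]
}
# Concatenate each chunk's delta content to rebuild the full message.
text = "".join(
    chunk["choices"][0]["delta"].get("content", "")
    for chunk in logged_body["streamed_data"]
)
```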