When handling sensitive data or operating under strict compliance requirements, you may need to track LLM metrics without storing the actual request and response content. Omit Logs lets you maintain observability for costs, latency, and usage patterns while excluding sensitive data from storage.
Helicone Python Async Logging: Use this if you’re using Helicone’s Python Asynchronous Logging and don’t want to send the request or response to our backend.
Note: With the header method, the request and response are not stored, but they are still sent to our backend.
To omit logging requests, set Helicone-Omit-Request to true.
To omit logging responses, set Helicone-Omit-Response to true.
```shell
curl https://oai.helicone.ai/v1/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Helicone-Omit-Request: true' \
  -H 'Helicone-Omit-Response: true' \
  -d '{
    "model": "text-davinci-003",
    "prompt": "How do I enable custom rate limit policies?"
  }'
```
If you are using Helicone’s Asynchronous Logging, you can avoid sending the request and response to our backend entirely.
```python
from helicone_async import HeliconeAsyncLogger
from openai import OpenAI

logger = HeliconeAsyncLogger(
    api_key=HELICONE_API_KEY,
)
logger.init()

client = OpenAI(api_key=OPENAI_API_KEY)

# Omit the request and response from being sent to our backend
# until logger.enable_content_tracing() is called
logger.disable_content_tracing()

# Make the OpenAI call
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
)
print(response.choices[0])

# All future requests will be sent with the request and response
logger.enable_content_tracing()
```
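Because disable_content_tracing() stays in effect until enable_content_tracing() is called, it is easy to forget to re-enable tracing after an exception. One way to make the omission scoped is a small context manager. This helper is not part of the Helicone SDK; it is a hypothetical wrapper around the two calls shown above:

```python
from contextlib import contextmanager

@contextmanager
def omit_content(logger):
    """Hypothetical helper: omit request/response content for the
    duration of the block, restoring tracing even on error."""
    logger.disable_content_tracing()
    try:
        yield
    finally:
        logger.enable_content_tracing()
```

Usage: `with omit_content(logger): client.chat.completions.create(...)` guarantees content tracing is restored when the block exits, whether or not the call raises.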
If you are using Helicone’s Asynchronous Logging, you can also completely disable all logging to Helicone:
```python
from helicone_async import HeliconeAsyncLogger
from openai import OpenAI

logger = HeliconeAsyncLogger(
    api_key=HELICONE_API_KEY,
)
logger.init()

# Completely disable all logging
logger.disable_logging()

# Your OpenAI calls here
# No data will be sent to Helicone

# Later, re-enable logging if needed
logger.enable_logging()
```
When logging is disabled, no traces will be sent to Helicone at all. This is different from disable_content_tracing() which only omits request and response content but still sends other metrics. Note that this feature is only available when using Helicone’s async integration mode and not with the proxy integration.
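A common pattern is to decide between these modes based on the runtime environment, e.g. full logging in production and no logging in local development. This is a sketch under that assumption; `configure_logging` and the `APP_ENV` variable are illustrative names, not part of the Helicone SDK:

```python
import os

def configure_logging(logger, env=None):
    """Illustrative helper: enable full Helicone logging in production,
    disable all logging everywhere else."""
    env = env or os.environ.get("APP_ENV", "development")
    if env == "production":
        logger.enable_logging()
    else:
        logger.disable_logging()
```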