When handling sensitive data or operating under strict compliance requirements, you may need to track LLM metrics without storing the actual request and response content. Omit Logs lets you maintain observability for costs, latency, and usage patterns while excluding sensitive data from storage.

Omit requests, responses, or both from your logs.

Why use Omit Logs

  • Maintain compliance: Meet GDPR, HIPAA, and other regulatory requirements by excluding sensitive content from logs
  • Protect sensitive data: Prevent PII, financial data, or proprietary information from being stored in observability systems
  • Keep essential metrics: Still track costs, latency, token usage, and performance without storing actual content
Omitted requests and responses are held only in memory and never written to long-term storage.
There are two ways to omit logs:
  1. Headers: Use this if you’re not using Helicone’s Python Asynchronous Logging.
  2. Helicone Python Async Logging: Use this if you’re using Helicone’s Python Asynchronous Logging and don’t want to send the request or response to our backend.

Headers

Note: this method prevents the request and response from being stored, but the content is still sent to our backend.
To omit logging requests, set Helicone-Omit-Request to true. To omit logging responses, set Helicone-Omit-Response to true.
curl https://oai.helicone.ai/v1/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Helicone-Omit-Request: true' \
  -H 'Helicone-Omit-Response: true' \
  -d '{
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "How do I enable custom rate limit policies?"
}'
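The same request can be issued from Python without any SDK. Here is a minimal sketch using only the standard library; YOUR_API_KEY is a placeholder, and no network call is made until urlopen is invoked:

```python
import json
import urllib.request

payload = json.dumps({
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "How do I enable custom rate limit policies?",
}).encode("utf-8")

req = urllib.request.Request(
    "https://oai.helicone.ai/v1/completions",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
        "Helicone-Omit-Request": "true",   # omit the request body from logs
        "Helicone-Omit-Response": "true",  # omit the response body from logs
    },
)
# response = urllib.request.urlopen(req)  # uncomment to actually send
```

Because the omit behavior is driven entirely by headers, this works with any HTTP client, not just the examples shown here.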

Python Async Logging

If you are using Helicone’s Asynchronous Logging, you can avoid sending the request and response content to our backend entirely.
from helicone_async import HeliconeAsyncLogger
from openai import OpenAI

logger = HeliconeAsyncLogger(
  api_key=HELICONE_API_KEY,
)

logger.init()

client = OpenAI(api_key=OPENAI_API_KEY)

logger.disable_content_tracing() # Omits request and response content until you call logger.enable_content_tracing()

# Make the OpenAI call
response = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[{"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the world series in 2020?"},
            {"role": "assistant",
             "content": "The Los Angeles Dodgers won the World Series in 2020."},
            {"role": "user", "content": "Where was it played?"}]
)

print(response.choices[0])
logger.enable_content_tracing() # Subsequent requests will again include request and response content
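If a call raises between disable_content_tracing() and enable_content_tracing(), content tracing stays off for every later request. One way to guard against that is a small try/finally wrapper. This is a sketch: omit_content is a hypothetical helper, and DemoLogger below merely stands in for HeliconeAsyncLogger to show the call order.

```python
from contextlib import contextmanager

@contextmanager
def omit_content(logger):
    """Hypothetical helper: disable content tracing for the enclosed
    calls and restore it even if one of them raises."""
    logger.disable_content_tracing()
    try:
        yield
    finally:
        logger.enable_content_tracing()

# Stub standing in for HeliconeAsyncLogger, only to demonstrate ordering.
class DemoLogger:
    def __init__(self):
        self.calls = []
    def disable_content_tracing(self):
        self.calls.append("disable")
    def enable_content_tracing(self):
        self.calls.append("enable")

logger = DemoLogger()
try:
    with omit_content(logger):
        raise RuntimeError("simulated failed OpenAI call")
except RuntimeError:
    pass

print(logger.calls)  # tracing was re-enabled despite the error
```

With the real HeliconeAsyncLogger, you would pass your logger instance to the same wrapper and place your OpenAI calls inside the with block.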

Request View

You can still see the key metrics for each request, without the request or response contents.

Focus on capturing key metrics and ignore the noise of the request and response.

Completely Disabling Logging

If you are using Helicone’s Asynchronous Logging, you can also completely disable all logging to Helicone:
from helicone_async import HeliconeAsyncLogger
from openai import OpenAI

logger = HeliconeAsyncLogger(
  api_key=HELICONE_API_KEY,
)

logger.init()

# Completely disable all logging
logger.disable_logging()

# Your OpenAI calls here
# No data will be sent to Helicone

# Later, re-enable logging if needed
logger.enable_logging()
When logging is disabled, no traces are sent to Helicone at all. This differs from disable_content_tracing(), which omits request and response content but still sends the other metrics. Note that this feature is only available with Helicone’s async integration mode, not with the proxy integration.
