Manual Logger - Python
Integrate any custom LLM with Helicone using the Python Manual Logger. Step-by-step guide for Python implementation to connect your proprietary or open-source models.
Logging calls to custom models is supported via the Helicone Python SDK.
1. Install the Helicone helpers package
2. Set `HELICONE_API_KEY` as an environment variable
3. Create a new `HeliconeManualLogger` instance
4. Define your operation and make the request
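The four steps above can be sketched as follows. This is a hedged sketch, not verbatim SDK code: the module name `helicone_helpers`, the `HeliconeManualLogger(api_key=...)` constructor, and the recorder's `append_results` method are taken from this page's API reference rather than verified against the installed SDK.

```python
import os

# Step 2: the API key is read from the environment.
# request: a plain dictionary describing the call you are about to make.
request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}

def run_logged_completion():
    # Step 1: `pip install` the Helicone helpers package first.
    from helicone_helpers import HeliconeManualLogger  # module name assumed
    import openai

    # Step 3: create the logger instance (constructor signature assumed).
    logger = HeliconeManualLogger(api_key=os.environ["HELICONE_API_KEY"])
    client = openai.OpenAI()

    # Step 4: the operation receives a HeliconeResultRecorder, performs the
    # call, records the result, and returns it.
    def operation(recorder):
        response = client.chat.completions.create(**request)
        recorder.append_results(response.to_dict())  # method name assumed
        return response

    return logger.log_request(provider="openai", request=request, operation=operation)
```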
API Reference
HeliconeManualLogger
LoggingOptions
log_request
Parameters
- `request`: A dictionary containing the request parameters
- `operation`: A callable that takes a `HeliconeResultRecorder` and returns a result
- `additional_headers`: Optional dictionary of additional headers
- `provider`: Optional provider specification ("openai", "anthropic", or None for custom)
send_log
Parameters
- `provider`: Optional provider specification ("openai", "anthropic", or None for custom)
- `request`: A dictionary containing the request parameters
- `response`: Either a dictionary or string response to log
- `options`: A `LoggingOptions` dictionary with timing information
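A sketch of assembling a `send_log` call. The `start_time`/`end_time` field names inside `LoggingOptions` are assumed from "timing information" above, not verified against the SDK.

```python
import time

request = {"model": "my-model", "prompt": "Hello"}

start_time = time.time()
response = {"output": "Hi there!"}  # stand-in for a real model call
end_time = time.time()

# LoggingOptions field names are an assumption.
options = {"start_time": start_time, "end_time": end_time}

# With a configured HeliconeManualLogger instance:
# logger.send_log(provider=None, request=request, response=response, options=options)
```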
HeliconeResultRecorder
Advanced Usage Examples
Direct Logging with String Response
For direct logging of string responses:
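Since `response` accepts a plain string, a direct log can skip the dictionary entirely. A sketch, assuming a configured `HeliconeManualLogger` named `logger`:

```python
import time

request = {"model": "my-model", "prompt": "Summarize this text."}

start_time = time.time()
response_text = "A short summary."  # plain-string model output
end_time = time.time()

# Assuming `logger` is a configured HeliconeManualLogger:
# logger.send_log(
#     provider=None,
#     request=request,
#     response=response_text,  # a str is accepted directly
#     options={"start_time": start_time, "end_time": end_time},
# )
```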
Streaming Responses
For streaming responses in Python, you can use the `log_request` method with time-to-first-token tracking:
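As a hedged sketch, the `operation` you pass to `log_request` can stamp the first chunk itself. The streaming iterator and recorder here are local stand-ins, and `append_results` is an assumed method name:

```python
import time

def fake_stream():
    # Stand-in for a provider's streaming iterator.
    yield from ["Hel", "lo", "!"]

class FakeResultRecorder:
    """Stand-in for HeliconeResultRecorder, for illustration only."""
    def append_results(self, results):  # method name assumed
        self.results = results

def streaming_operation(recorder):
    start = time.time()
    first_token_time = None
    chunks = []
    for chunk in fake_stream():
        if first_token_time is None:
            first_token_time = time.time() - start  # time to first token
        chunks.append(chunk)
    text = "".join(chunks)
    recorder.append_results({"content": text, "time_to_first_token": first_token_time})
    return text

recorder = FakeResultRecorder()
result = streaming_operation(recorder)
```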
Using with Anthropic
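A hedged sketch of routing an Anthropic call through `log_request`, so Helicone can parse the response with `provider="anthropic"`. The `anthropic` client usage and the recorder method names are assumptions:

```python
def anthropic_logged_call(logger):
    """Sketch: pass provider="anthropic" so Helicone parses the response."""
    import anthropic  # requires `pip install anthropic`

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    request = {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    }

    def operation(recorder):
        response = client.messages.create(**request)
        recorder.append_results(response.to_dict())  # method names assumed
        return response

    return logger.log_request(provider="anthropic", request=request, operation=operation)
```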
Custom Model Integration
For custom models that don’t have a specific provider integration:
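A runnable sketch using a hypothetical stand-in for a proprietary model; with no matching provider, passing `provider=None` logs the payload as-is. Everything here other than the `send_log` parameter names above is a made-up placeholder:

```python
import time

def my_custom_model(prompt):
    # Hypothetical stand-in for a proprietary or self-hosted model.
    return f"echo: {prompt}"

request = {"model": "my-custom-model", "prompt": "Hello"}

start_time = time.time()
response = my_custom_model(request["prompt"])
end_time = time.time()

# With a configured HeliconeManualLogger:
# logger.send_log(provider=None, request=request, response=response,
#                 options={"start_time": start_time, "end_time": end_time})
```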
For more examples and detailed usage, check out our Manual Logger with Streaming cookbook.
Direct Stream Logging
For direct control over streaming responses, you can use the `send_log` method to manually track time to first token:
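A sketch: consume the stream yourself, stamp the first chunk, then call `send_log` once at the end. The stream here is a local stand-in, and the `time_to_first_token` field inside the options dictionary is an assumed name:

```python
import time

def fake_stream():
    yield from ["data ", "chunk ", "done"]  # stand-in for a provider stream

start_time = time.time()
first_token_time = None
collected = []
for chunk in fake_stream():
    if first_token_time is None:
        first_token_time = time.time() - start_time  # time to first token
    collected.append(chunk)
end_time = time.time()

response_text = "".join(collected)
options = {
    "start_time": start_time,
    "end_time": end_time,
    "time_to_first_token": first_token_time,  # field name assumed
}
# logger.send_log(provider=None, request={"prompt": "stream please"},
#                 response=response_text, options=options)
```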
This approach gives you complete control over the streaming process while still capturing important metrics like time to first token.