Manual Logger Overview

The Helicone Manual Logger allows you to integrate any custom LLM with Helicone’s observability platform. This approach gives you complete control over the logging process and works with any model, whether it’s a proprietary model you’ve built or an open-source model you’re hosting.

Key Features

  • Universal Compatibility: Works with any LLM, regardless of provider or hosting method
  • Flexible Integration: Multiple implementation options (TypeScript, Python, cURL)
  • Complete Control: You decide exactly what data to log and when
  • Token Tracking: Capture token usage across different provider formats
  • Streaming Support: Efficiently log streaming responses
  • Custom Properties: Add metadata to your requests for better analytics

Integration Options

The Manual Logger can be used from any of the following:

  • TypeScript
  • Python
  • cURL (direct HTTP requests to the Helicone logging API)

When to Use the Manual Logger

The Manual Logger is ideal for:

  1. Custom Models: When you’re using a model that doesn’t have a direct Helicone integration
  2. Self-Hosted Models: When you’re hosting open-source models like Llama, Mistral, or GPT-Neo
  3. Complex Workflows: When you need fine-grained control over what gets logged and when
  4. Multi-Provider Applications: When your application uses multiple LLM providers

How It Works

The Manual Logger works by:

  1. Capturing Request Data: You capture the request data before sending it to your LLM
  2. Executing the LLM Call: You make the call to your LLM as normal
  3. Capturing Response Data: You capture the response data from your LLM
  4. Logging to Helicone: You send both the request and response data to Helicone

This approach keeps you in control of every step of the logging process while still giving you the full benefit of Helicone’s observability platform.
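
For example, here is a minimal TypeScript sketch of those four steps using the `HeliconeManualLogger` from the `@helicone/helpers` package. The model endpoint (`https://my-model.example.com/v1/generate`) and the request/response shapes are hypothetical placeholders for your own deployment:

```typescript
import { HeliconeManualLogger } from "@helicone/helpers";

const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY!,
});

// 1. Capture the request data before sending it to your LLM.
const requestBody = {
  model: "my-custom-model", // hypothetical model name
  prompt: "Explain observability in one sentence.",
};

const response = await logger.logRequest(
  requestBody,
  async (resultRecorder) => {
    // 2. Execute the LLM call as normal (hypothetical self-hosted endpoint).
    const res = await fetch("https://my-model.example.com/v1/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(requestBody),
    });

    // 3. Capture the response data from your LLM.
    const resBody = await res.json();

    // 4. Log to Helicone: appendResults records the response body.
    resultRecorder.appendResults(resBody);
    return resBody; // returned to the caller of logRequest
  },
  // Optional: custom properties for analytics, sent as Helicone headers.
  { "Helicone-Property-Environment": "development" }
);
```

The third argument is optional; it is one place to attach the custom-property metadata mentioned above so it appears alongside the logged request in Helicone.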

Token Tracking

Helicone supports token tracking for custom model integrations. The Manual Logger recognizes several token-count formats:

  • OpenAI-style format (prompt_tokens, completion_tokens, total_tokens)
  • Anthropic-style format (input_tokens, output_tokens)
  • Google-style format (promptTokenCount, candidatesTokenCount, totalTokenCount)
  • Alternative formats (prompt_token_count, generation_token_count)

If your model returns token counts in a different format, you can transform the response to match one of these formats before logging to Helicone.
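
As an illustration, the sketch below reshapes a custom model’s output into the OpenAI-style format before logging. The raw field names (`text`, `tokens_in`, `tokens_out`) are hypothetical stand-ins for whatever your model actually returns:

```typescript
// Hypothetical raw response from a custom model that reports token
// counts under non-standard field names.
interface CustomModelResponse {
  text: string;
  stats: { tokens_in: number; tokens_out: number };
}

// Normalize to an OpenAI-style body with a `usage` block so Helicone
// can pick up the token counts.
function withOpenAIUsage(raw: CustomModelResponse) {
  return {
    choices: [{ message: { role: "assistant", content: raw.text } }],
    usage: {
      prompt_tokens: raw.stats.tokens_in,
      completion_tokens: raw.stats.tokens_out,
      total_tokens: raw.stats.tokens_in + raw.stats.tokens_out,
    },
  };
}
```

You would then pass the normalized object, rather than the raw response, to your logging call (for example, to `resultRecorder.appendResults` in the sketch above).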

Next Steps

Choose your preferred integration method (TypeScript, Python, or cURL) to get started.

For more advanced usage examples, check out our Manual Logger with Streaming cookbook.