These integration methods let you add Helicone’s observability to your existing LLM applications without adopting the AI Gateway. Perfect for teams with established codebases or specific provider requirements.

When to Use These Methods

Existing Codebase

Your application is already built with provider-specific SDKs and refactoring would be costly

Provider Features

You need native SDK features, such as OpenAI's function-calling interface or Anthropic's message format

Minimal Changes

You want observability without changing your current authentication or request flow

Framework Users

You’re using LangChain, LlamaIndex, or other frameworks that handle LLM calls

Quick Comparison

| Aspect | Alternative Integrations | AI Gateway |
|---|---|---|
| Setup | Change endpoint URL only | ✅ Simple - one endpoint for 100+ models |
| Code changes | Minimal - keep existing SDK | Minimal - unified OpenAI format |
| Switch providers | ❌ Rewrite for each provider | ✅ Just change model name |
| Provider features | ✅ Full native support | Standard OpenAI format |
| Observability | ✅ Full Helicone features | ✅ Full Helicone features |
| Fallbacks | ❌ Manual implementation | ✅ Automatic |
| Best for | Existing apps, native features | New projects, multi-provider |

Direct Provider Integrations

Simply change your base URL to add Helicone observability:

Cloud Providers

Speed-Optimized Providers

Framework Integrations

If you’re using an AI framework, add Helicone with minimal configuration:

Custom Integration Options

Async Logging

For zero-latency observability, send logs to Helicone after receiving LLM responses. Because no proxy sits in the request path, logging adds no latency to your LLM calls.
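A minimal sketch of the pattern: call your provider directly, then report the request/response pair to Helicone afterwards. The endpoint URL and payload shape below follow Helicone's manual logging API as I understand it; verify both against the current docs before relying on them.

```python
import json
import urllib.request

# Assumed manual-logging endpoint - check Helicone's docs for the current URL.
HELICONE_LOG_URL = "https://api.worker.helicone.ai/custom/v1/log"

def build_helicone_log(provider_request: dict, provider_response: dict,
                       start_s: float, end_s: float) -> dict:
    """Assemble the log payload (assumed schema) from an already-completed call."""
    def ts(t: float) -> dict:
        return {"seconds": int(t), "milliseconds": int((t % 1) * 1000)}
    return {
        "providerRequest": {
            "url": "https://api.openai.com/v1/chat/completions",
            "json": provider_request,
            "meta": {},
        },
        "providerResponse": {"status": 200, "headers": {}, "json": provider_response},
        "timing": {"startTime": ts(start_s), "endTime": ts(end_s)},
    }

def send_log(payload: dict, helicone_api_key: str) -> None:
    """Fire-and-forget POST to Helicone; run in a background thread or task."""
    req = urllib.request.Request(
        HELICONE_LOG_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {helicone_api_key}",
        },
        method="POST",
    )
    urllib.request.urlopen(req)  # network call; error handling omitted

# Example (no network):
payload = build_helicone_log({"model": "gpt-4"}, {"id": "resp-1"}, 100.0, 101.5)
```

Because logging happens after the response is returned to your user, a dropped log never delays or fails the LLM call itself.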

Custom HTTP Clients

Any HTTP client can work with Helicone:
curl https://oai.helicone.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Helicone-Auth: Bearer $HELICONE_API_KEY" \
  -d '{"model": "gpt-4", "messages": [...]}'
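The same request with no HTTP library at all, using only Python's standard library (the message body here is a placeholder):

```python
import json
import os
import urllib.request

# Same request as the curl example: the only Helicone-specific pieces are
# the proxy hostname and the Helicone-Auth header.
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    "Helicone-Auth": f"Bearer {os.environ.get('HELICONE_API_KEY', '')}",
}
body = {"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}

req = urllib.request.Request(
    "https://oai.helicone.ai/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers=headers,
    method="POST",
)
# with urllib.request.urlopen(req) as resp:   # network call
#     print(json.load(resp))
```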

Implementation Patterns

Still Considering the AI Gateway?

The AI Gateway might be better if you:
  • Want automatic fallbacks between providers
  • Need a unified API for multiple models
  • Are starting a new project
  • Want built-in prompt management

Learn About AI Gateway

See how the AI Gateway simplifies multi-provider LLM applications